PR tedyu: Don't try to create VolumeSpec immediately after underlying PVC is being deleted
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2020-01-14 21:22
Elapsed: 12m8s
Revision: 041c43ec5eef2064a20669515877fd9e32b119b2
Refs: 86670

No Test Failures!


Error lines from build-log.txt

... skipping 193 lines ...
localAPIEndpoint:
  advertiseAddress: 172.17.0.4
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.4
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.2:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.4
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 29 lines ...
localAPIEndpoint:
  advertiseAddress: 172.17.0.3
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.3
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.2:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.3
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 29 lines ...
localAPIEndpoint:
  advertiseAddress: 172.17.0.2
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.2
---
apiVersion: kubeadm.k8s.io/v1beta2
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 172.17.0.2
... skipping 4 lines ...
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.2
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 36 lines ...
I0114 21:27:11.845633     156 checks.go:376] validating the presence of executable ebtables
I0114 21:27:11.845672     156 checks.go:376] validating the presence of executable ethtool
I0114 21:27:11.845697     156 checks.go:376] validating the presence of executable socat
I0114 21:27:11.845747     156 checks.go:376] validating the presence of executable tc
I0114 21:27:11.845787     156 checks.go:376] validating the presence of executable touch
I0114 21:27:11.845832     156 checks.go:520] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0114 21:27:11.850789     156 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0114 21:27:11.851057     156 checks.go:618] validating kubelet version
I0114 21:27:11.931325     156 checks.go:128] validating if the service is enabled and active
I0114 21:27:11.943691     156 checks.go:201] validating availability of port 10250
I0114 21:27:11.943770     156 checks.go:201] validating availability of port 2379
I0114 21:27:11.943792     156 checks.go:201] validating availability of port 2380
... skipping 67 lines ...
I0114 21:27:18.058502     156 request.go:853] Got a Retry-After 1s response for attempt 4 to https://172.17.0.2:6443/healthz?timeout=10s
I0114 21:27:19.059365     156 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=10s  in 0 milliseconds
I0114 21:27:19.059450     156 request.go:853] Got a Retry-After 1s response for attempt 5 to https://172.17.0.2:6443/healthz?timeout=10s
I0114 21:27:20.059970     156 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=10s  in 0 milliseconds
I0114 21:27:20.060025     156 request.go:853] Got a Retry-After 1s response for attempt 6 to https://172.17.0.2:6443/healthz?timeout=10s
I0114 21:27:25.055072     156 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=10s  in 3994 milliseconds
I0114 21:27:25.557613     156 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=10s 500 Internal Server Error in 2 milliseconds
I0114 21:27:26.057895     156 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=10s 500 Internal Server Error in 2 milliseconds
I0114 21:27:26.558526     156 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=10s 500 Internal Server Error in 3 milliseconds
I0114 21:27:27.058100     156 round_trippers.go:443] GET https://172.17.0.2:6443/healthz?timeout=10s 200 OK in 2 milliseconds
I0114 21:27:27.058197     156 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[apiclient] All control plane components are healthy after 12.003433 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0114 21:27:27.064491     156 round_trippers.go:443] POST https://172.17.0.2:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 4 milliseconds
I0114 21:27:27.069167     156 round_trippers.go:443] POST https://172.17.0.2:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 3 milliseconds
... skipping 106 lines ...
I0114 21:27:33.276172     369 checks.go:376] validating the presence of executable ebtables
I0114 21:27:33.276205     369 checks.go:376] validating the presence of executable ethtool
I0114 21:27:33.276224     369 checks.go:376] validating the presence of executable socat
I0114 21:27:33.276249     369 checks.go:376] validating the presence of executable tc
I0114 21:27:33.276268     369 checks.go:376] validating the presence of executable touch
I0114 21:27:33.276300     369 checks.go:520] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0114 21:27:33.283593     369 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0114 21:27:33.283837     369 checks.go:618] validating kubelet version
I0114 21:27:33.378854     369 checks.go:128] validating if the service is enabled and active
I0114 21:27:33.393835     369 checks.go:201] validating availability of port 10250
I0114 21:27:33.394043     369 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt
I0114 21:27:33.394076     369 checks.go:432] validating if the connectivity type is via proxy or direct
... skipping 96 lines ...
I0114 21:27:33.268716     369 checks.go:376] validating the presence of executable ebtables
I0114 21:27:33.268755     369 checks.go:376] validating the presence of executable ethtool
I0114 21:27:33.268779     369 checks.go:376] validating the presence of executable socat
I0114 21:27:33.268808     369 checks.go:376] validating the presence of executable tc
I0114 21:27:33.268829     369 checks.go:376] validating the presence of executable touch
I0114 21:27:33.268869     369 checks.go:520] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0114 21:27:33.278032     369 checks.go:406] checking whether the given node name is reachable using net.LookupHost
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
... skipping 111 lines ...
Will run 4844 specs

Running in parallel across 25 nodes

Jan 14 21:28:17.157: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jan 14 21:28:17.160: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 14 21:28:17.176: INFO: Condition Ready of node kind-worker is false instead of true. Reason: KubeletNotReady, message: Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "kind-worker" not found
Jan 14 21:28:17.176: INFO: Condition Ready of node kind-worker2 is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready  NoSchedule <nil>} {node.kubernetes.io/not-ready  NoExecute 2020-01-14 21:28:08 +0000 UTC}]. Failure
Jan 14 21:28:17.176: INFO: Unschedulable nodes:
Jan 14 21:28:17.176: INFO: -> kind-worker Ready=false Network=false Taints=[{node.kubernetes.io/not-ready  NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master
Jan 14 21:28:17.176: INFO: -> kind-worker2 Ready=false Network=false Taints=[{node.kubernetes.io/not-ready  NoSchedule <nil>} {node.kubernetes.io/not-ready  NoExecute 2020-01-14 21:28:08 +0000 UTC}] NonblockingTaints:node-role.kubernetes.io/master
Jan 14 21:28:17.176: INFO: ================================
Jan 14 21:28:47.181: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
... skipping 333 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 193 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: hostPath]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:191

      Driver hostPath doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 259 lines ...
  test/e2e/framework/framework.go:175
Jan 14 21:28:47.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "clientset-9983" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create v1beta1 cronJobs, delete cronJobs, watch cronJobs","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:28:47.530: INFO: Driver gluster doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/framework/framework.go:175
Jan 14 21:28:47.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 41 lines ...
  test/e2e/framework/framework.go:175
Jan 14 21:28:47.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-469" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:28:47.587: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  test/e2e/framework/framework.go:175
Jan 14 21:28:47.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 252 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/common/sysctl.go:63
[It] should not launch unsafe, but not explicitly enabled sysctls on the node
  test/e2e/common/sysctl.go:188
STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node
STEP: Watching for error events or started pod
STEP: Checking that the pod was rejected
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/framework/framework.go:175
Jan 14 21:28:49.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-3110" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node","total":-1,"completed":1,"skipped":10,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:28:49.548: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 70 lines ...
Jan 14 21:28:47.381: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename pod-disks
Jan 14 21:28:49.567: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Pod Disks
  test/e2e/storage/pd.go:74
[It] should be able to delete a non-existent PD without error
  test/e2e/storage/pd.go:447
Jan 14 21:28:49.585: INFO: Only supported for providers [gce] (not skeleton)
[AfterEach] [sig-storage] Pod Disks
  test/e2e/framework/framework.go:175
Jan 14 21:28:49.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-disks-885" for this suite.


S [SKIPPING] [2.212 seconds]
[sig-storage] Pod Disks
test/e2e/storage/utils/framework.go:23
  should be able to delete a non-existent PD without error [It]
  test/e2e/storage/pd.go:447

  Only supported for providers [gce] (not skeleton)

  test/e2e/storage/pd.go:448
------------------------------
... skipping 127 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 18 lines ...
      Driver supports dynamic provisioning, skipping InlineVolume pattern

      test/e2e/storage/testsuites/base.go:688
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json,application/vnd.kubernetes.protobuf\"","total":-1,"completed":2,"skipped":2,"failed":0}
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 14 21:28:47.573: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 11 lines ...
  test/e2e/framework/framework.go:175
Jan 14 21:28:49.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-2921" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return pod details","total":-1,"completed":3,"skipped":2,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:28:49.900: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 35 lines ...
  test/e2e/framework/framework.go:175
Jan 14 21:28:49.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6595" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret","total":-1,"completed":2,"skipped":16,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:28:49.979: INFO: Only supported for providers [openstack] (not skeleton)
... skipping 191 lines ...
• [SLOW TEST:8.964 seconds]
[sig-storage] ConfigMap
test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:28:56.334: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:175
Jan 14 21:28:56.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 139 lines ...
• [SLOW TEST:8.156 seconds]
[sig-storage] Secrets
test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SSS
------------------------------
[BeforeEach] [k8s.io] Pods
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:10.502 seconds]
[k8s.io] Pods
test/e2e/framework/framework.go:680
  should support remote command execution over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:28:57.877: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  test/e2e/framework/framework.go:175
Jan 14 21:28:57.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 44 lines ...
  test/e2e/common/runtime.go:38
    on terminated container
    test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:28:58.970: INFO: Only supported for providers [aws] (not skeleton)
... skipping 100 lines ...
STEP: Building a namespace api object, basename container-runtime
Jan 14 21:28:49.112: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 14 21:28:59.308: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
... skipping 9 lines ...
  test/e2e/common/runtime.go:38
    on terminated container
    test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":16,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 174 lines ...
  test/e2e/common/runtime.go:38
    when running a container with a new image
    test/e2e/common/runtime.go:263
      should not be able to pull image from invalid registry [NodeConformance]
      test/e2e/common/runtime.go:369
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":1,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:02.491: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:175
Jan 14 21:29:02.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 51 lines ...
• [SLOW TEST:15.637 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:03.024: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/framework/framework.go:175
Jan 14 21:29:03.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 121 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl expose
  test/e2e/kubectl/kubectl.go:1297
    should create services for rc  [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:03.187: INFO: Driver local doesn't support ext4 -- skipping
... skipping 61 lines ...
  test/e2e/framework/framework.go:175
Jan 14 21:29:03.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4120" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes","total":-1,"completed":2,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:03.314: INFO: Driver gluster doesn't support ext4 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/framework/framework.go:175
Jan 14 21:29:03.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 91 lines ...
• [SLOW TEST:19.451 seconds]
[sig-storage] ConfigMap
test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:06.838: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/framework/framework.go:175
Jan 14 21:29:06.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 36 lines ...
• [SLOW TEST:17.023 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":11,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:22.831 seconds]
[sig-storage] Downward API volume
test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:10.138: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/framework/framework.go:175
Jan 14 21:29:10.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 139 lines ...
• [SLOW TEST:14.172 seconds]
[sig-storage] Projected configMap
test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:10.535: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 172 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":11,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:10.216 seconds]
[sig-node] ConfigMap
test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":27,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 149 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":1,"skipped":3,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:14.876: INFO: Distro debian doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  test/e2e/framework/framework.go:175
Jan 14 21:29:14.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 173 lines ...
• [SLOW TEST:28.345 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should support cascading deletion of custom resources
  test/e2e/apimachinery/garbage_collector.go:870
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support cascading deletion of custom resources","total":-1,"completed":1,"skipped":7,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 65 lines ...
• [SLOW TEST:30.137 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should be able to update service type to NodePort listening on same port number but different protocols
  test/e2e/network/service.go:1527
------------------------------
{"msg":"PASSED [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols","total":-1,"completed":1,"skipped":0,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:17.414: INFO: Driver cinder doesn't support ntfs -- skipping
... skipping 45 lines ...
• [SLOW TEST:18.234 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
  evictions: enough pods, replicaSet, percentage => should allow an eviction
  test/e2e/apps/disruption.go:151
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage =\u003e should allow an eviction","total":-1,"completed":2,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:20.734: INFO: Driver gluster doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/framework/framework.go:175
Jan 14 21:29:20.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 33 lines ...
  test/e2e/framework/framework.go:175
Jan 14 21:29:20.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4054" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":3,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 30 lines ...
test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/empty_dir.go:43
    files with FSGroup ownership should support (root,0644,tmpfs)
    test/e2e/common/empty_dir.go:62
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)","total":-1,"completed":3,"skipped":28,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 55 lines ...
• [SLOW TEST:16.233 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":16,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:23.182: INFO: Only supported for providers [azure] (not skeleton)
... skipping 66 lines ...
• [SLOW TEST:11.082 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":3,"skipped":30,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:24.512: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:175
Jan 14 21:29:24.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 173 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":2,"skipped":34,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:25.225: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 21 lines ...
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 14 21:28:49.989: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:685
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:175
Jan 14 21:29:26.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7123" for this suite.
• [SLOW TEST:36.290 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":3,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 9 lines ...
  test/e2e/framework/framework.go:175
Jan 14 21:29:26.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6523" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":4,"skipped":23,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 12 lines ...
  test/e2e/framework/framework.go:175
Jan 14 21:29:26.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2071" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":4,"skipped":53,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 33 lines ...
• [SLOW TEST:15.081 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:27.273: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 194 lines ...
• [SLOW TEST:18.126 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":2,"skipped":12,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:28.308: INFO: Driver csi-hostpath doesn't support ext3 -- skipping
... skipping 91 lines ...
  test/e2e/framework/framework.go:175
Jan 14 21:29:28.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-16" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":-1,"completed":5,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:28.930: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  test/e2e/framework/framework.go:175
Jan 14 21:29:28.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 74 lines ...
test/e2e/framework/framework.go:680
  when creating containers with AllowPrivilegeEscalation
  test/e2e/common/security_context.go:290
    should allow privilege escalation when true [LinuxOnly] [NodeConformance]
    test/e2e/common/security_context.go:361
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":2,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 65 lines ...
• [SLOW TEST:8.956 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":4,"skipped":15,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Subpath
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 44 lines ...
test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":22,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 14 21:29:31.666: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename node-problem-detector
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters]
  test/e2e/node/node_problem_detector.go:50
Jan 14 21:29:31.753: INFO: No SSH Key for provider skeleton: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''
[AfterEach] [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters]
  test/e2e/framework/framework.go:175
Jan 14 21:29:31.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-problem-detector-1917" for this suite.
S [SKIPPING] in Spec Setup (BeforeEach) [0.109 seconds]
[k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters]
test/e2e/framework/framework.go:680
  should run without error [BeforeEach]
  test/e2e/node/node_problem_detector.go:58

  No SSH Key for provider skeleton: 'error reading SSH key /root/.ssh/id_rsa: 'open /root/.ssh/id_rsa: no such file or directory''

  test/e2e/node/node_problem_detector.go:51
------------------------------
SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 93 lines ...
• [SLOW TEST:6.237 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":54,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:33.064: INFO: Driver gluster doesn't support DynamicPV -- skipping
... skipping 68 lines ...
  test/e2e/framework/framework.go:175
Jan 14 21:29:33.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5443" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure  [Conformance]","total":-1,"completed":6,"skipped":62,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:33.494: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 103 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] volumes
    test/e2e/storage/testsuites/base.go:94
      should store data
      test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":1,"skipped":7,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 15 lines ...
  test/e2e/framework/framework.go:175
Jan 14 21:29:33.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-179" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes","total":-1,"completed":2,"skipped":9,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:33.749: INFO: Only supported for providers [azure] (not skeleton)
... skipping 125 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:36.818: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  test/e2e/framework/framework.go:175
Jan 14 21:29:36.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 74 lines ...
test/e2e/common/empty_dir.go:40
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/empty_dir.go:43
    volume on default medium should have the correct mode using FSGroup
    test/e2e/common/empty_dir.go:66
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":4,"skipped":32,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:36.956: INFO: Only supported for providers [openstack] (not skeleton)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/framework/framework.go:175
Jan 14 21:29:36.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 209 lines ...
  test/e2e/kubectl/portforward.go:466
    that expects a client request
    test/e2e/kubectl/portforward.go:467
      should support a client that connects, sends DATA, and disconnects
      test/e2e/kubectl/portforward.go:471
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":2,"skipped":15,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 31 lines ...
• [SLOW TEST:7.728 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 159 lines ...
• [SLOW TEST:18.342 seconds]
[sig-storage] Projected configMap
test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/projected_configmap.go:73
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":38,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:43.574: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/framework/framework.go:175
Jan 14 21:29:43.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 109 lines ...
      test/e2e/storage/testsuites/provisioning.go:173

      Distro debian doesn't support ntfs -- skipping

      test/e2e/storage/testsuites/base.go:159
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root","total":-1,"completed":1,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 14 21:29:03.848: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 51 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should support file as subpath [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:228
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":2,"skipped":24,"failed":0}
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 14 21:29:44.144: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 6 lines ...
  test/e2e/framework/framework.go:175
Jan 14 21:29:44.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-4388" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls","total":-1,"completed":3,"skipped":24,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:44.228: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 53 lines ...
• [SLOW TEST:16.365 seconds]
[sig-storage] Projected configMap
test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:44.689: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  test/e2e/framework/framework.go:175
Jan 14 21:29:44.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 53 lines ...
  test/e2e/framework/framework.go:175
Jan 14 21:29:46.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9932" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":4,"skipped":36,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:46.382: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/framework/framework.go:175
Jan 14 21:29:46.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 77 lines ...
• [SLOW TEST:20.406 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:34
  should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/projected_downwardapi.go:105
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":32,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 14 21:29:47.716: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
  test/e2e/framework/framework.go:175
Jan 14 21:29:47.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7785" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":4,"skipped":32,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:48.005: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 81 lines ...
      test/e2e/storage/testsuites/volume_expand.go:220

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:148
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup","total":-1,"completed":6,"skipped":24,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 14 21:29:29.718: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 31 lines ...
• [SLOW TEST:18.901 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":7,"skipped":24,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:48.621: INFO: Distro debian doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  test/e2e/framework/framework.go:175
Jan 14 21:29:48.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 46 lines ...
• [SLOW TEST:10.200 seconds]
[k8s.io] Docker Containers
test/e2e/framework/framework.go:680
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":61,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:53.847: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 262 lines ...
• [SLOW TEST:10.102 seconds]
[k8s.io] Docker Containers
test/e2e/framework/framework.go:680
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":27,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:54.808: INFO: Only supported for providers [aws] (not skeleton)
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  test/e2e/framework/framework.go:175
Jan 14 21:29:54.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 35 lines ...
STEP: Destroying namespace "services-9821" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:692

•
------------------------------
{"msg":"PASSED [sig-network] Services should check NodePort out-of-range","total":-1,"completed":5,"skipped":36,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-windows] Windows volume mounts 
  test/e2e/windows/framework.go:28
Jan 14 21:29:55.210: INFO: Only supported for node OS distro [windows] (not debian)
... skipping 118 lines ...
• [SLOW TEST:18.217 seconds]
[sig-storage] Projected configMap
test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/projected_configmap.go:57
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":5,"skipped":55,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:55.265: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:175
Jan 14 21:29:55.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 109 lines ...
Jan 14 21:29:38.454: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jan 14 21:29:38.569: INFO: Exec stderr: ""
Jan 14 21:29:46.584: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-3051"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-3051"/host; echo host > "/var/lib/kubelet/mount-propagation-3051"/host/file] Namespace:mount-propagation-3051 PodName:hostexec-kind-worker-7rjbd ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jan 14 21:29:46.584: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jan 14 21:29:46.841: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-3051 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 14 21:29:46.841: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jan 14 21:29:47.038: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil>
Jan 14 21:29:47.041: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-3051 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 14 21:29:47.041: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jan 14 21:29:47.367: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jan 14 21:29:47.438: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-3051 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 14 21:29:47.438: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jan 14 21:29:47.770: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jan 14 21:29:47.832: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-3051 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 14 21:29:47.832: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jan 14 21:29:48.227: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jan 14 21:29:48.231: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-3051 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 14 21:29:48.231: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jan 14 21:29:48.521: INFO: pod master mount host: stdout: "host", stderr: "" error: <nil>
Jan 14 21:29:48.551: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-3051 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 14 21:29:48.551: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jan 14 21:29:48.799: INFO: pod slave mount master: stdout: "master", stderr: "" error: <nil>
Jan 14 21:29:48.815: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-3051 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 14 21:29:48.815: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jan 14 21:29:49.085: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: <nil>
Jan 14 21:29:49.091: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-3051 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 14 21:29:49.091: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jan 14 21:29:49.488: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jan 14 21:29:49.496: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-3051 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 14 21:29:49.496: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jan 14 21:29:49.842: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jan 14 21:29:49.855: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-3051 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 14 21:29:49.855: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jan 14 21:29:50.199: INFO: pod slave mount host: stdout: "host", stderr: "" error: <nil>
Jan 14 21:29:50.211: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-3051 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 14 21:29:50.211: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jan 14 21:29:50.591: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Jan 14 21:29:50.595: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-3051 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 14 21:29:50.595: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jan 14 21:29:50.991: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jan 14 21:29:51.000: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-3051 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 14 21:29:51.000: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jan 14 21:29:51.313: INFO: pod private mount private: stdout: "private", stderr: "" error: <nil>
Jan 14 21:29:51.320: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-3051 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 14 21:29:51.320: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jan 14 21:29:51.604: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jan 14 21:29:51.612: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-3051 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 14 21:29:51.612: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jan 14 21:29:51.986: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Jan 14 21:29:51.991: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-3051 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 14 21:29:51.991: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jan 14 21:29:52.374: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Jan 14 21:29:52.383: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-3051 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 14 21:29:52.383: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jan 14 21:29:52.584: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jan 14 21:29:52.597: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-3051 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 14 21:29:52.597: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jan 14 21:29:52.839: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jan 14 21:29:52.844: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-3051 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 14 21:29:52.844: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jan 14 21:29:53.010: INFO: pod default mount default: stdout: "default", stderr: "" error: <nil>
Jan 14 21:29:53.014: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-3051 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 14 21:29:53.014: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jan 14 21:29:53.196: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Jan 14 21:29:53.196: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-3051"/master/file` = master] Namespace:mount-propagation-3051 PodName:hostexec-kind-worker-7rjbd ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jan 14 21:29:53.196: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jan 14 21:29:53.389: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! -e "/var/lib/kubelet/mount-propagation-3051"/slave/file] Namespace:mount-propagation-3051 PodName:hostexec-kind-worker-7rjbd ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jan 14 21:29:53.389: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jan 14 21:29:53.620: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-3051"/host] Namespace:mount-propagation-3051 PodName:hostexec-kind-worker-7rjbd ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jan 14 21:29:53.621: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 21 lines ...
• [SLOW TEST:67.948 seconds]
[k8s.io] [sig-node] Mount propagation
test/e2e/framework/framework.go:680
  should propagate mounts to the host
  test/e2e/node/mount_propagation.go:82
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":1,"skipped":1,"failed":0}
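The exec results above form a visibility matrix: the `master` pod (Bidirectional propagation) and `slave` pod (HostToContainer) see mounts that reached the host, while the `private` and `default` pods (propagation None) see only their own. A minimal Go sketch encoding that matrix as reported by this log — the pod names, modes, and `sees` helper are illustrative, not part of the e2e suite:

```go
package main

import "fmt"

// Propagation modes exercised by the test, per the log above:
//   "master"  -> Bidirectional (mounts propagate to the host and back)
//   "slave"   -> HostToContainer (receives mounts made on the host)
//   "private", "default" -> None (isolated)
// sees reports whether pod `viewer` should observe the mount created by
// `owner` ("host" meaning a mount made directly on the node). Bidirectional
// mounts reach the host and therefore also HostToContainer pods; a
// HostToContainer pod's own mount never leaves it.
func sees(viewer, owner string) bool {
	if viewer == owner {
		return true // every pod sees its own mount
	}
	switch viewer {
	case "master", "slave":
		// Only mounts visible on the host propagate in: the host's own
		// mount and the Bidirectional pod's mount.
		return owner == "host" || owner == "master"
	default: // "private", "default": no propagation either way
		return false
	}
}

func main() {
	viewers := []string{"master", "slave", "private", "default"}
	owners := []string{"master", "slave", "private", "default", "host"}
	for _, v := range viewers {
		for _, o := range owners {
			fmt.Printf("pod %-7s sees %-7s mount: %v\n", v, o, sees(v, o))
		}
	}
}
```

This also explains the host-side checks near the end of the block: the host asserts `master/file` exists (Bidirectional propagated out) but `slave/file` does not (HostToContainer mounts stay in the pod).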

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:55.326: INFO: Only supported for providers [openstack] (not skeleton)
... skipping 160 lines ...
  test/e2e/framework/framework.go:175
Jan 14 21:29:55.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-131" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":-1,"completed":6,"skipped":66,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:55.703: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 57 lines ...
  test/e2e/framework/framework.go:175
Jan 14 21:29:55.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7299" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes","total":-1,"completed":2,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:55.711: INFO: Driver azure-disk doesn't support ntfs -- skipping
... skipping 78 lines ...

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 14 21:29:51.881: INFO: File wheezy_udp@dns-test-service-3.dns-1301.svc.cluster.local from pod  dns-1301/dns-test-4e02b465-a4b5-47ba-860e-81dd3308756b contains '' instead of '10.96.255.211'
Jan 14 21:29:51.889: INFO: Lookups using dns-1301/dns-test-4e02b465-a4b5-47ba-860e-81dd3308756b failed for: [wheezy_udp@dns-test-service-3.dns-1301.svc.cluster.local]

Jan 14 21:29:56.904: INFO: DNS probes using dns-test-4e02b465-a4b5-47ba-860e-81dd3308756b succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
... skipping 5 lines ...
• [SLOW TEST:69.721 seconds]
[sig-network] DNS
test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:29:57.102: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 170 lines ...
  test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume one after the other
    test/e2e/storage/persistent_volumes-local.go:249
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:250
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":2,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:30:03.471: INFO: Driver vsphere doesn't support ntfs -- skipping
... skipping 187 lines ...
• [SLOW TEST:66.510 seconds]
[k8s.io] Probing container
test/e2e/framework/framework.go:680
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-instrumentation] Cadvisor
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 10 lines ...
  test/e2e/framework/framework.go:175
Jan 14 21:30:04.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cadvisor-3091" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Cadvisor should be healthy on every node.","total":-1,"completed":3,"skipped":46,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: no PDB =\u003e should allow an eviction","total":-1,"completed":2,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 14 21:29:15.171: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 135 lines ...
  test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":3,"skipped":16,"failed":0}
[BeforeEach] [sig-storage] Flexvolumes
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 14 21:30:06.624: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename flexvolume
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 217 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/testsuites/base.go:94
      should store data
      test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:30:11.260: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:175
Jan 14 21:30:11.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 82 lines ...
test/e2e/network/framework.go:23
  Granular Checks: Services
  test/e2e/network/networking.go:161
    should function for pod-Service: udp
    test/e2e/network/networking.go:172
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for pod-Service: udp","total":-1,"completed":2,"skipped":27,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 30 lines ...
• [SLOW TEST:18.429 seconds]
[sig-storage] Projected configMap
test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/projected_configmap.go:108
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":14,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:30:14.148: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 42 lines ...
• [SLOW TEST:59.330 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should remove pods when job is deleted
  test/e2e/apps/job.go:73
------------------------------
{"msg":"PASSED [sig-apps] Job should remove pods when job is deleted","total":-1,"completed":2,"skipped":7,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
... skipping 8 lines ...
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:191

      Driver "csi-hostpath" does not support topology - skipping

      test/e2e/storage/testsuites/topology.go:96
------------------------------
... skipping 115 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":3,"skipped":7,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:30:20.762: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 133 lines ...
      Driver supports dynamic provisioning, skipping InlineVolume pattern

      test/e2e/storage/testsuites/base.go:688
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":3,"skipped":13,"failed":0}
[BeforeEach] [k8s.io] Pods
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 14 21:29:54.032: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 12 lines ...
• [SLOW TEST:28.157 seconds]
[k8s.io] Pods
test/e2e/framework/framework.go:680
  should get a host IP [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:30:22.193: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 72 lines ...
• [SLOW TEST:34.461 seconds]
[sig-network] DNS
test/e2e/network/framework.go:23
  should support configurable pod resolv.conf
  test/e2e/network/dns.go:455
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod resolv.conf","total":-1,"completed":5,"skipped":39,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:30:22.496: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 252 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should support readOnly directory specified in the volumeMount
      test/e2e/storage/testsuites/subpath.go:362
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":3,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 89 lines ...
• [SLOW TEST:20.137 seconds]
[sig-api-machinery] Generated clientset
test/e2e/apimachinery/framework.go:23
  should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
  test/e2e/apimachinery/generated_clientset.go:103
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod","total":-1,"completed":4,"skipped":47,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 30 lines ...
• [SLOW TEST:16.215 seconds]
[sig-storage] Projected secret
test/e2e/common/projected_secret.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  test/e2e/common/projected_secret.go:89
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]","total":-1,"completed":3,"skipped":29,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:30:28.025: INFO: Only supported for providers [aws] (not skeleton)
... skipping 110 lines ...
Jan 14 21:30:28.116: INFO: pv is nil


S [SKIPPING] in Spec Setup (BeforeEach) [0.064 seconds]
[sig-storage] PersistentVolumes GCEPD
test/e2e/storage/utils/framework.go:23
  should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach [BeforeEach]
  test/e2e/storage/persistent_volumes-gce.go:139

  Only supported for providers [gce gke] (not skeleton)

  test/e2e/storage/persistent_volumes-gce.go:83
------------------------------
... skipping 114 lines ...
• [SLOW TEST:15.900 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":4,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 115 lines ...
• [SLOW TEST:18.153 seconds]
[sig-storage] Projected secret
test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":19,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:30:40.360: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 13 lines ...
      test/e2e/storage/testsuites/subpath.go:377

      Distro debian doesn't support ntfs -- skipping

      test/e2e/storage/testsuites/base.go:159
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp","total":-1,"completed":1,"skipped":20,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 14 21:29:58.648: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 218 lines ...
  test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":8,"skipped":30,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:30:41.348: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  test/e2e/framework/framework.go:175
Jan 14 21:30:41.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 150 lines ...
      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:148
------------------------------
SSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":89,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 14 21:30:10.324: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 55 lines ...
  test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and write from pod1
      test/e2e/storage/persistent_volumes-local.go:233
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":6,"skipped":89,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:30:43.276: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/framework/framework.go:175
Jan 14 21:30:43.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 264 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:191

      Only supported for providers [aws] (not skeleton)

      test/e2e/storage/drivers/in_tree.go:1645
------------------------------
... skipping 81 lines ...
Jan 14 21:30:02.309: INFO: PersistentVolumeClaim csi-hostpath7tmx8 found but phase is Pending instead of Bound.
Jan 14 21:30:04.323: INFO: PersistentVolumeClaim csi-hostpath7tmx8 found but phase is Pending instead of Bound.
Jan 14 21:30:06.368: INFO: PersistentVolumeClaim csi-hostpath7tmx8 found but phase is Pending instead of Bound.
Jan 14 21:30:08.375: INFO: PersistentVolumeClaim csi-hostpath7tmx8 found and phase=Bound (12.150218871s)
STEP: Expanding non-expandable pvc
Jan 14 21:30:08.398: INFO: currentPvcSize {{1048576 0} {<nil>} 1Mi BinarySI}, newSize {{1074790400 0} {<nil>}  BinarySI}
Jan 14 21:30:08.406: INFO: Error updating pvc csi-hostpath7tmx8 with persistentvolumeclaims "csi-hostpath7tmx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 14 21:30:10.413: INFO: Error updating pvc csi-hostpath7tmx8 with persistentvolumeclaims "csi-hostpath7tmx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 14 21:30:12.414: INFO: Error updating pvc csi-hostpath7tmx8 with persistentvolumeclaims "csi-hostpath7tmx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 14 21:30:14.413: INFO: Error updating pvc csi-hostpath7tmx8 with persistentvolumeclaims "csi-hostpath7tmx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 14 21:30:16.418: INFO: Error updating pvc csi-hostpath7tmx8 with persistentvolumeclaims "csi-hostpath7tmx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 14 21:30:18.422: INFO: Error updating pvc csi-hostpath7tmx8 with persistentvolumeclaims "csi-hostpath7tmx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 14 21:30:20.414: INFO: Error updating pvc csi-hostpath7tmx8 with persistentvolumeclaims "csi-hostpath7tmx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 14 21:30:22.426: INFO: Error updating pvc csi-hostpath7tmx8 with persistentvolumeclaims "csi-hostpath7tmx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 14 21:30:24.433: INFO: Error updating pvc csi-hostpath7tmx8 with persistentvolumeclaims "csi-hostpath7tmx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 14 21:30:26.417: INFO: Error updating pvc csi-hostpath7tmx8 with persistentvolumeclaims "csi-hostpath7tmx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 14 21:30:28.438: INFO: Error updating pvc csi-hostpath7tmx8 with persistentvolumeclaims "csi-hostpath7tmx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 14 21:30:30.426: INFO: Error updating pvc csi-hostpath7tmx8 with persistentvolumeclaims "csi-hostpath7tmx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 14 21:30:32.418: INFO: Error updating pvc csi-hostpath7tmx8 with persistentvolumeclaims "csi-hostpath7tmx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 14 21:30:34.415: INFO: Error updating pvc csi-hostpath7tmx8 with persistentvolumeclaims "csi-hostpath7tmx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 14 21:30:36.414: INFO: Error updating pvc csi-hostpath7tmx8 with persistentvolumeclaims "csi-hostpath7tmx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 14 21:30:38.414: INFO: Error updating pvc csi-hostpath7tmx8 with persistentvolumeclaims "csi-hostpath7tmx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jan 14 21:30:38.419: INFO: Error updating pvc csi-hostpath7tmx8 with persistentvolumeclaims "csi-hostpath7tmx8" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Jan 14 21:30:38.420: INFO: Deleting PersistentVolumeClaim "csi-hostpath7tmx8"
Jan 14 21:30:38.423: INFO: Waiting up to 5m0s for PersistentVolume pvc-aaf90b4f-1289-46a9-9504-61ef19f20479 to get deleted
Jan 14 21:30:38.427: INFO: PersistentVolume pvc-aaf90b4f-1289-46a9-9504-61ef19f20479 found and phase=Bound (3.246516ms)
Jan 14 21:30:43.450: INFO: PersistentVolume pvc-aaf90b4f-1289-46a9-9504-61ef19f20479 was removed
STEP: Deleting sc
... skipping 43 lines ...
  test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)] volume-expand
    test/e2e/storage/testsuites/base.go:94
      should not allow expansion of pvcs without AllowVolumeExpansion property
      test/e2e/storage/testsuites/volume_expand.go:147
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":6,"skipped":51,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:30:44.107: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 82 lines ...
• [SLOW TEST:38.514 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":17,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 14 21:29:42.057: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 196 lines ...
test/e2e/kubectl/framework.go:23
  Update Demo
  test/e2e/kubectl/kubectl.go:330
    should scale a replication controller  [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":-1,"completed":6,"skipped":17,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl alpha client
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 8 lines ...
  test/e2e/kubectl/kubectl.go:237
Jan 14 21:30:46.676: INFO: Could not find batch/v2alpha1, Resource=cronjobs resource, skipping test: &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"Status", APIVersion:"v1"}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"the server could not find the requested resource", Reason:"NotFound", Details:(*v1.StatusDetails)(0xc0018d5680), Code:404}}
[AfterEach] Kubectl run CronJob
  test/e2e/kubectl/kubectl.go:233
Jan 14 21:30:46.678: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:32768 --kubeconfig=/root/.kube/kind-test-config delete cronjobs e2e-test-echo-cronjob-alpha --namespace=kubectl-9264'
Jan 14 21:30:46.946: INFO: rc: 1
Jan 14 21:30:46.946: FAIL: Unexpected error:
    <exec.CodeExitError>: {
        Err: {
            s: "error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:32768 --kubeconfig=/root/.kube/kind-test-config delete cronjobs e2e-test-echo-cronjob-alpha --namespace=kubectl-9264:\nCommand stdout:\n\nstderr:\nError from server (NotFound): cronjobs.batch \"e2e-test-echo-cronjob-alpha\" not found\n\nerror:\nexit status 1",
        },
        Code: 1,
    }
    error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:32768 --kubeconfig=/root/.kube/kind-test-config delete cronjobs e2e-test-echo-cronjob-alpha --namespace=kubectl-9264:
    Command stdout:
    
    stderr:
    Error from server (NotFound): cronjobs.batch "e2e-test-echo-cronjob-alpha" not found
    
    error:
    exit status 1
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.KubectlBuilder.ExecOrDie(0xc001d258c0, 0x0, 0xc0055eacf0, 0xc, 0x4, 0xc0055ccc60)
	test/e2e/framework/util.go:701 +0xbc
... skipping 64 lines ...
• [SLOW TEST:28.090 seconds]
[k8s.io] Probing container
test/e2e/framework/framework.go:680
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":29,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:30:48.884: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/framework/framework.go:175
Jan 14 21:30:48.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 36 lines ...
• [SLOW TEST:80.767 seconds]
[sig-storage] Projected configMap
test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":38,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:30:49.714: INFO: Only supported for providers [openstack] (not skeleton)
... skipping 170 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":7,"skipped":70,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
... skipping 69 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should support readOnly file specified in the volumeMount [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:377
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":3,"skipped":23,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:30:50.077: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 93 lines ...
• [SLOW TEST:10.188 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":21,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:30:50.559: INFO: Driver local doesn't support ext4 -- skipping
... skipping 95 lines ...
• [SLOW TEST:12.208 seconds]
[sig-storage] Projected configMap
test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":53,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:30:53.608: INFO: Only supported for providers [vsphere] (not skeleton)
... skipping 54 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:148
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":2,"skipped":6,"failed":0}
[BeforeEach] [k8s.io] Pods
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 14 21:30:39.675: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 19 lines ...
• [SLOW TEST:16.680 seconds]
[k8s.io] Pods
test/e2e/framework/framework.go:680
  should be submitted and removed [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":6,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 10 lines ...
  test/e2e/framework/framework.go:175
Jan 14 21:30:56.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2022" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 24 lines ...
• [SLOW TEST:8.114 seconds]
[k8s.io] Variable Expansion
test/e2e/framework/framework.go:680
  should allow substituting values in a volume subpath [sig-storage]
  test/e2e/common/expansion.go:161
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage]","total":-1,"completed":7,"skipped":53,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:30:57.863: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
... skipping 80 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":4,"skipped":28,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:30:59.742: INFO: Only supported for providers [openstack] (not skeleton)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/framework/framework.go:175
Jan 14 21:30:59.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 16 lines ...
[BeforeEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 14 21:30:59.745: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name secret-emptykey-test-87f218ac-fdbb-4be1-ad9e-b323dfc2bdbd
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Jan 14 21:30:59.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2760" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":5,"skipped":29,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:30:59.935: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 279 lines ...
Jan 14 21:30:32.564: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Jan 14 21:30:32.564: INFO: Waiting for all frontend pods to be Running.
Jan 14 21:30:57.615: INFO: Waiting for frontend to serve content.
Jan 14 21:30:57.627: INFO: Trying to add a new entry to the guestbook.
Jan 14 21:30:57.643: INFO: Verifying that added entry can be retrieved.
Jan 14 21:30:57.658: INFO: Failed to get response from guestbook. err: <nil>, response: {"data":""}
STEP: using delete to clean up resources
Jan 14 21:31:02.699: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:32768 --kubeconfig=/root/.kube/kind-test-config delete --grace-period=0 --force -f - --namespace=kubectl-1737'
Jan 14 21:31:03.106: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 14 21:31:03.106: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 14 21:31:03.106: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:32768 --kubeconfig=/root/.kube/kind-test-config delete --grace-period=0 --force -f - --namespace=kubectl-1737'
... skipping 26 lines ...
test/e2e/kubectl/framework.go:23
  Guestbook application
  test/e2e/kubectl/kubectl.go:388
    should create and stop a working application  [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":5,"skipped":18,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:20.526 seconds]
[sig-storage] Secrets
test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":57,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:31:04.645: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:175
Jan 14 21:31:04.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 106 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl logs
  test/e2e/kubectl/kubectl.go:1462
    should be able to retrieve and filter logs  [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":5,"skipped":31,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:31:06.424: INFO: Driver azure-disk doesn't support ntfs -- skipping
... skipping 147 lines ...
• [SLOW TEST:11.569 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":8,"skipped":60,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:31:09.438: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/framework/framework.go:175
Jan 14 21:31:09.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 123 lines ...
test/e2e/storage/utils/framework.go:23
  CSI online volume expansion
  test/e2e/storage/csi_mock_volume.go:530
    should expand volume without restarting pod if attach=off, nodeExpansion=on
    test/e2e/storage/csi_mock_volume.go:545
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on","total":-1,"completed":1,"skipped":0,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:31:09.455: INFO: Only supported for providers [aws] (not skeleton)
... skipping 62 lines ...
• [SLOW TEST:20.256 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":4,"skipped":35,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
... skipping 24 lines ...
      Driver "local" does not provide raw block - skipping

      test/e2e/storage/testsuites/volumes.go:101
------------------------------
SS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":2,"skipped":20,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 14 21:30:40.379: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 61 lines ...
  test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and write from pod1
      test/e2e/storage/persistent_volumes-local.go:233
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":3,"skipped":20,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 13 lines ...
  test/e2e/framework/framework.go:175
Jan 14 21:31:11.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9324" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":5,"skipped":43,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:31:11.126: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 73 lines ...
• [SLOW TEST:21.633 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":8,"skipped":71,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:31:11.681: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (delayed binding)] topology
  test/e2e/framework/framework.go:175
Jan 14 21:31:11.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 3 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 70 lines ...
[BeforeEach] [sig-network] Services
  test/e2e/network/service.go:688
[It] should serve a basic endpoint from pods  [Conformance]
  test/e2e/framework/framework.go:685
STEP: creating service endpoint-test2 in namespace services-2366
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2366 to expose endpoints map[]
Jan 14 21:30:43.516: INFO: Get endpoints failed (16.220856ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan 14 21:30:44.519: INFO: successfully validated that service endpoint-test2 in namespace services-2366 exposes endpoints map[] (1.019982592s elapsed)
STEP: Creating pod pod1 in namespace services-2366
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2366 to expose endpoints map[pod1:[80]]
Jan 14 21:30:48.656: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.130494755s elapsed, will retry)
Jan 14 21:30:53.812: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (9.285589618s elapsed, will retry)
Jan 14 21:30:58.863: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (14.336614622s elapsed, will retry)
... skipping 20 lines ...
• [SLOW TEST:31.465 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":7,"skipped":120,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:31:14.890: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 70 lines ...
Jan 14 21:30:46.153: INFO: Unable to read jessie_udp@dns-test-service.dns-9442 from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:30:46.170: INFO: Unable to read jessie_tcp@dns-test-service.dns-9442 from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:30:46.182: INFO: Unable to read jessie_udp@dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:30:46.186: INFO: Unable to read jessie_tcp@dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:30:46.213: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:30:46.224: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:30:46.286: INFO: Lookups using dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9442 wheezy_tcp@dns-test-service.dns-9442 wheezy_udp@dns-test-service.dns-9442.svc wheezy_tcp@dns-test-service.dns-9442.svc wheezy_udp@_http._tcp.dns-test-service.dns-9442.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9442.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9442 jessie_tcp@dns-test-service.dns-9442 jessie_udp@dns-test-service.dns-9442.svc jessie_tcp@dns-test-service.dns-9442.svc jessie_udp@_http._tcp.dns-test-service.dns-9442.svc jessie_tcp@_http._tcp.dns-test-service.dns-9442.svc]

Jan 14 21:30:51.326: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:30:51.459: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:30:51.476: INFO: Unable to read wheezy_udp@dns-test-service.dns-9442 from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:30:51.532: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9442 from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:30:51.589: INFO: Unable to read wheezy_udp@dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
... skipping 5 lines ...
Jan 14 21:30:51.847: INFO: Unable to read jessie_udp@dns-test-service.dns-9442 from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:30:51.858: INFO: Unable to read jessie_tcp@dns-test-service.dns-9442 from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:30:51.868: INFO: Unable to read jessie_udp@dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:30:51.912: INFO: Unable to read jessie_tcp@dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:30:51.922: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:30:51.932: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:30:52.038: INFO: Lookups using dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9442 wheezy_tcp@dns-test-service.dns-9442 wheezy_udp@dns-test-service.dns-9442.svc wheezy_tcp@dns-test-service.dns-9442.svc wheezy_udp@_http._tcp.dns-test-service.dns-9442.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9442.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9442 jessie_tcp@dns-test-service.dns-9442 jessie_udp@dns-test-service.dns-9442.svc jessie_tcp@dns-test-service.dns-9442.svc jessie_udp@_http._tcp.dns-test-service.dns-9442.svc jessie_tcp@_http._tcp.dns-test-service.dns-9442.svc]

Jan 14 21:30:56.291: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:30:56.294: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:30:56.298: INFO: Unable to read wheezy_udp@dns-test-service.dns-9442 from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:30:56.301: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9442 from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:30:56.310: INFO: Unable to read wheezy_udp@dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
... skipping 5 lines ...
Jan 14 21:30:56.350: INFO: Unable to read jessie_udp@dns-test-service.dns-9442 from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:30:56.354: INFO: Unable to read jessie_tcp@dns-test-service.dns-9442 from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:30:56.358: INFO: Unable to read jessie_udp@dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:30:56.363: INFO: Unable to read jessie_tcp@dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:30:56.366: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:30:56.371: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:30:56.406: INFO: Lookups using dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9442 wheezy_tcp@dns-test-service.dns-9442 wheezy_udp@dns-test-service.dns-9442.svc wheezy_tcp@dns-test-service.dns-9442.svc wheezy_udp@_http._tcp.dns-test-service.dns-9442.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9442.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9442 jessie_tcp@dns-test-service.dns-9442 jessie_udp@dns-test-service.dns-9442.svc jessie_tcp@dns-test-service.dns-9442.svc jessie_udp@_http._tcp.dns-test-service.dns-9442.svc jessie_tcp@_http._tcp.dns-test-service.dns-9442.svc]

Jan 14 21:31:01.297: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:01.301: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:01.305: INFO: Unable to read wheezy_udp@dns-test-service.dns-9442 from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:01.310: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9442 from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:01.318: INFO: Unable to read wheezy_udp@dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
... skipping 5 lines ...
Jan 14 21:31:01.473: INFO: Unable to read jessie_udp@dns-test-service.dns-9442 from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:01.488: INFO: Unable to read jessie_tcp@dns-test-service.dns-9442 from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:01.509: INFO: Unable to read jessie_udp@dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:01.526: INFO: Unable to read jessie_tcp@dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:01.536: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:01.577: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:01.685: INFO: Lookups using dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9442 wheezy_tcp@dns-test-service.dns-9442 wheezy_udp@dns-test-service.dns-9442.svc wheezy_tcp@dns-test-service.dns-9442.svc wheezy_udp@_http._tcp.dns-test-service.dns-9442.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9442.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9442 jessie_tcp@dns-test-service.dns-9442 jessie_udp@dns-test-service.dns-9442.svc jessie_tcp@dns-test-service.dns-9442.svc jessie_udp@_http._tcp.dns-test-service.dns-9442.svc jessie_tcp@_http._tcp.dns-test-service.dns-9442.svc]

Jan 14 21:31:06.336: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:06.424: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:06.501: INFO: Unable to read wheezy_udp@dns-test-service.dns-9442 from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:06.565: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9442 from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:06.650: INFO: Unable to read wheezy_udp@dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
... skipping 5 lines ...
Jan 14 21:31:07.328: INFO: Unable to read jessie_udp@dns-test-service.dns-9442 from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:07.411: INFO: Unable to read jessie_tcp@dns-test-service.dns-9442 from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:07.426: INFO: Unable to read jessie_udp@dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:07.441: INFO: Unable to read jessie_tcp@dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:07.486: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:07.600: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:08.102: INFO: Lookups using dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9442 wheezy_tcp@dns-test-service.dns-9442 wheezy_udp@dns-test-service.dns-9442.svc wheezy_tcp@dns-test-service.dns-9442.svc wheezy_udp@_http._tcp.dns-test-service.dns-9442.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9442.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9442 jessie_tcp@dns-test-service.dns-9442 jessie_udp@dns-test-service.dns-9442.svc jessie_tcp@dns-test-service.dns-9442.svc jessie_udp@_http._tcp.dns-test-service.dns-9442.svc jessie_tcp@_http._tcp.dns-test-service.dns-9442.svc]

Jan 14 21:31:11.353: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:11.406: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:11.469: INFO: Unable to read wheezy_udp@dns-test-service.dns-9442 from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:11.556: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9442 from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:11.606: INFO: Unable to read wheezy_udp@dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
... skipping 5 lines ...
Jan 14 21:31:12.033: INFO: Unable to read jessie_udp@dns-test-service.dns-9442 from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:12.112: INFO: Unable to read jessie_tcp@dns-test-service.dns-9442 from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:12.127: INFO: Unable to read jessie_udp@dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:12.139: INFO: Unable to read jessie_tcp@dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:12.197: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:12.303: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9442.svc from pod dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f: the server could not find the requested resource (get pods dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f)
Jan 14 21:31:12.560: INFO: Lookups using dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9442 wheezy_tcp@dns-test-service.dns-9442 wheezy_udp@dns-test-service.dns-9442.svc wheezy_tcp@dns-test-service.dns-9442.svc wheezy_udp@_http._tcp.dns-test-service.dns-9442.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9442.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9442 jessie_tcp@dns-test-service.dns-9442 jessie_udp@dns-test-service.dns-9442.svc jessie_tcp@dns-test-service.dns-9442.svc jessie_udp@_http._tcp.dns-test-service.dns-9442.svc jessie_tcp@_http._tcp.dns-test-service.dns-9442.svc]

Jan 14 21:31:17.277: INFO: DNS probes using dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 6 lines ...
• [SLOW TEST:74.125 seconds]
[sig-network] DNS
test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
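The repeated `Unable to read ...` / `Lookups using ... failed for: [...]` lines above all follow one fixed shape. As a minimal Python sketch (the helper name `failed_lookups` is hypothetical, not part of the e2e framework), the still-failing DNS record names can be pulled out of a single summary line like this:

```python
import re

# Hypothetical helper: extract the still-failing DNS record names from one
# "Lookups using ... failed for: [...]" summary line of the e2e log.
def failed_lookups(line: str) -> list:
    m = re.search(r"failed for: \[([^\]]*)\]", line)
    return m.group(1).split() if m else []

summary = ("Jan 14 21:31:12.560: INFO: Lookups using dns-9442/dns-test "
           "failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service "
           "jessie_udp@dns-test-service.dns-9442.svc]")
names = failed_lookups(summary)
# Each entry encodes image_protocol@record-name, e.g. wheezy_udp@dns-test-service
assert len(names) == 3 and all("@" in n for n in names)
```

Counting these names per summary line shows the probe set shrinking to empty as the test converges on the final `DNS probes ... succeeded` line.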
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":11,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 28 lines ...
• [SLOW TEST:14.812 seconds]
[sig-storage] ConfigMap
test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/configmap_volume.go:72
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":67,"failed":0}
[BeforeEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 14 21:31:03.861: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 23 lines ...
test/e2e/framework/framework.go:680
  when creating containers with AllowPrivilegeEscalation
  test/e2e/common/security_context.go:290
    should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
    test/e2e/common/security_context.go:329
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":11,"skipped":67,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:31:20.481: INFO: Only supported for providers [aws] (not skeleton)
... skipping 61 lines ...
      Driver cinder doesn't support ntfs -- skipping

      test/e2e/storage/testsuites/base.go:153
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":7,"skipped":66,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 14 21:31:07.009: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 30 lines ...
Jan 14 21:31:10.997: INFO: stdout: "NAMESPACE      NAME                  DESIRED   CURRENT   READY   AGE\nkubectl-8260   rc1mjn46qfpsp         1         1         0       0s\nproxy-8892     proxy-service-chhwb   1         1         0       4s\n"
Jan 14 21:31:11.163: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:32768 --kubeconfig=/root/.kube/kind-test-config get serviceaccounts --all-namespaces'
Jan 14 21:31:11.411: INFO: stderr: ""
Jan 14 21:31:11.411: INFO: stdout: "NAMESPACE                            NAME                                     SECRETS   AGE\nconfigmap-4054                       default                                  1         7s\ncrd-publish-openapi-681              default                                  1         21s\ncronjob-9015                         default                                  1         24s\ncsi-mock-volumes-6885                csi-attacher                             1         60s\ncsi-mock-volumes-6885                csi-mock                                 1         60s\ncsi-mock-volumes-6885                csi-provisioner                          1         60s\ncsi-mock-volumes-6885                csi-resizer                              1         60s\ncsi-mock-volumes-6885                default                                  1         60s\ncsi-mock-volumes-7542                default                                  1         2m24s\ncsi-mock-volumes-7850                csi-attacher                             1         15s\ncsi-mock-volumes-7850                csi-mock                                 1         15s\ncsi-mock-volumes-7850                csi-provisioner                          1         15s\ncsi-mock-volumes-7850                csi-resizer                              1         15s\ncsi-mock-volumes-7850                default                                  1         15s\ncsi-mock-volumes-8526                csi-attacher                             1         49s\ncsi-mock-volumes-8526                csi-mock                                 1         49s\ncsi-mock-volumes-8526                csi-provisioner                          1         49s\ncsi-mock-volumes-8526                csi-resizer                              1         49s\ncsi-mock-volumes-8526                default                                  1         49s\ncsi-mock-volumes-8822                csi-attacher                             1         74s\ncsi-mock-volumes-8822                csi-mock                                 1         74s\ncsi-mock-volumes-8822                csi-provisioner                          1         74s\ncsi-mock-volumes-8822                csi-resizer                              1         74s\ncsi-mock-volumes-8822                default                                  1         74s\ncustom-resource-definition-9324      default                                  1         1s\ndefault                              default                                  1         3m28s\ndns-9442                             default                                  1         68s\nephemeral-3610                       csi-attacher                             1         43s\nephemeral-3610                       csi-provisioner                          1         43s\nephemeral-3610                       csi-resizer                              1         43s\nephemeral-3610                       csi-snapshotter                          1         43s\nephemeral-3610                       default                                  1         43s\nephemeral-4014                       csi-attacher                             1         85s\nephemeral-4014                       csi-provisioner                          1         85s\nephemeral-4014                       csi-resizer                              1         85s\nephemeral-4014                       csi-snapshotter                          1         85s\nephemeral-4014                       default                                  1         85s\njob-3875                             default                                  1         21s\nkube-node-lease                      default                                  1         3m28s\nkube-public                          default                                  1         3m28s\nkube-system                          attachdetach-controller                  1         3m29s\nkube-system                          bootstrap-signer                         1         3m30s\nkube-system                          certificate-controller                   1         3m42s\nkube-system                          clusterrole-aggregation-controller       1         3m30s\nkube-system                          coredns                                  1         3m43s\nkube-system                          cronjob-controller                       1         3m42s\nkube-system                          daemon-set-controller                    1         3m43s\nkube-system                          default                                  1         3m28s\nkube-system                          deployment-controller                    1         3m29s\nkube-system                          disruption-controller                    1         3m30s\nkube-system                          endpoint-controller                      1         3m41s\nkube-system                          expand-controller                        1         3m42s\nkube-system                          generic-garbage-collector                1         3m41s\nkube-system                          horizontal-pod-autoscaler                1         3m43s\nkube-system                          job-controller                           1         3m30s\nkube-system                          kindnet                                  1         3m41s\nkube-system                          kube-proxy                               1         3m43s\nkube-system                          namespace-controller                     1         3m43s\nkube-system                          node-controller                          1         3m40s\nkube-system                          persistent-volume-binder                 1         3m29s\nkube-system                          pod-garbage-collector                    1         3m43s\nkube-system                          pv-protection-controller                 1         3m30s\nkube-system                          pvc-protection-controller                1         3m43s\nkube-system                          replicaset-controller                    1         3m28s\nkube-system                          replication-controller                   1         3m30s\nkube-system                          resourcequota-controller                 1         3m43s\nkube-system                          service-account-controller               1         3m42s\nkube-system                          service-controller                       1         3m30s\nkube-system                          statefulset-controller                   1         3m43s\nkube-system                          token-cleaner                            1         3m41s\nkube-system                          ttl-controller                           1         3m30s\nkubectl-1737                         default                                  1         41s\nkubectl-7013                         default                                  1         23s\nkubectl-8260                         default                                  1         4s\nkubectl-8260                         sa1namemjn46qfpsp                        2         0s\nkubectl-8440                         default                                  1         1s\nlocal-path-storage                   default                                  1         3m28s\nlocal-path-storage                   local-path-provisioner-service-account   1         3m39s\nnamespace1mjn46qfpsp                 default                                  1         2s\nnettest-4461                         default                                  1         10s\nnettest-9734                         default                                  1         49s\npersistent-local-volumes-test-2002   default                                  0         0s\npersistent-local-volumes-test-6760   default                                  1         31s\npersistent-local-volumes-test-7750   default                                  1         21s\npersistent-local-volumes-test-7782   default                                  1         2s\nprovisioning-4063                    csi-attacher                             1         7s\nprovisioning-4063                    csi-provisioner                          1         7s\nprovisioning-4063                    csi-resizer                              1         7s\nprovisioning-4063                    csi-snapshotter                          1         7s\nprovisioning-4063                    default                                  1         7s\nproxy-8892                           default                                  1         5s\nresourcequota-2756                   default                                  1         14s\nsecret-namespace-514                 default                                  0         27s\nsecrets-9495                         default                                  1         27s\nsecurity-context-test-7441           default                                  1         8s\nservices-2366                        default                                  1         28s\nservices-8847                        default                                  1         2s\nstatefulset-1314                     default                                  1         26s\nstatefulset-742                      default                                  1         12s\nvolume-174                           default                                  1         98s\nvolume-502                           default                                  1         1s\nvolume-6476                          default                                  1         100s\n"
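The `kubectl get ... --all-namespaces` stdout captured above is a plain fixed-width table: columns are separated by runs of spaces. A minimal sketch (the `parse_table` name is hypothetical, not a kubectl or e2e helper) of turning such output into records:

```python
import re

# Hypothetical parser for kubectl's default table output: split the header and
# each row on runs of 2+ spaces, then zip rows against the header names.
def parse_table(stdout: str) -> list:
    lines = [l for l in stdout.splitlines() if l.strip()]
    header = re.split(r"\s{2,}", lines[0].strip())
    return [dict(zip(header, re.split(r"\s{2,}", l.strip()))) for l in lines[1:]]

sample = ("NAMESPACE      NAME      SECRETS   AGE\n"
          "kube-system    coredns   1         3m43s\n"
          "dns-9442       default   1         68s\n")
rows = parse_table(sample)
assert rows[0]["NAME"] == "coredns" and rows[1]["NAMESPACE"] == "dns-9442"
```

This split works only when no cell itself contains two consecutive spaces; for machine consumption, `kubectl get -o json` avoids the issue entirely.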
Jan 14 21:31:11.544: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:32768 --kubeconfig=/root/.kube/kind-test-config get events --all-namespaces'
Jan 14 21:31:11.921: INFO: stderr: ""
Jan 14 21:31:11.921: INFO: stdout: "NAMESPACE                            LAST SEEN   TYPE      REASON                       OBJECT                                                      MESSAGE\nconfigmap-4054                       7s          Normal    Scheduled                    pod/pod-configmaps-d1fd0c22-79a0-41c6-b67c-0727e7df2580     Successfully assigned configmap-4054/pod-configmaps-d1fd0c22-79a0-41c6-b67c-0727e7df2580 to kind-worker\nconfigmap-4054                       4s          Normal    Pulled                       pod/pod-configmaps-d1fd0c22-79a0-41c6-b67c-0727e7df2580     Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nconfigmap-4054                       4s          Normal    Created                      pod/pod-configmaps-d1fd0c22-79a0-41c6-b67c-0727e7df2580     Created container configmap-volume-test\nconfigmap-4054                       3s          Normal    Started                      pod/pod-configmaps-d1fd0c22-79a0-41c6-b67c-0727e7df2580     Started container configmap-volume-test\ncronjob-9015                         2s          Normal    Scheduled                    pod/forbid-1579037460-pjdnc                                 Successfully assigned cronjob-9015/forbid-1579037460-pjdnc to kind-worker\ncronjob-9015                         2s          Normal    SuccessfulCreate             job/forbid-1579037460                                       Created pod: forbid-1579037460-pjdnc\ncronjob-9015                         2s          Normal    SuccessfulCreate             cronjob/forbid                                              Created job forbid-1579037460\ncsi-mock-volumes-6885                58s         Normal    Pulled                       pod/csi-mockplugin-0                                        Container image \"quay.io/k8scsi/csi-provisioner:v1.5.0\" already present on machine\ncsi-mock-volumes-6885                58s         Normal    Created                      pod/csi-mockplugin-0                                        Created container csi-provisioner\ncsi-mock-volumes-6885                58s         Normal    Started                      pod/csi-mockplugin-0                                        Started container csi-provisioner\ncsi-mock-volumes-6885                58s         Normal    Pulled                       pod/csi-mockplugin-0                                        Container image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\" already present on machine\ncsi-mock-volumes-6885                58s         Normal    Created                      pod/csi-mockplugin-0                                        Created container driver-registrar\ncsi-mock-volumes-6885                58s         Normal    Started                      pod/csi-mockplugin-0                                        Started container driver-registrar\ncsi-mock-volumes-6885                58s         Normal    Pulling                      pod/csi-mockplugin-0                                        Pulling image \"quay.io/k8scsi/mock-driver:v2.1.0\"\ncsi-mock-volumes-6885                49s         Normal    Pulled                       pod/csi-mockplugin-0                                        Successfully pulled image \"quay.io/k8scsi/mock-driver:v2.1.0\"\ncsi-mock-volumes-6885                49s         Normal    Created                      pod/csi-mockplugin-0                                        Created container mock\ncsi-mock-volumes-6885                49s         Normal    Started                      pod/csi-mockplugin-0                                        Started container mock\ncsi-mock-volumes-6885                59s         Normal    Pulled                       pod/csi-mockplugin-attacher-0                               Container image \"quay.io/k8scsi/csi-attacher:v2.1.0\" already present on machine\ncsi-mock-volumes-6885                58s         Normal    Created                      pod/csi-mockplugin-attacher-0                               Created container csi-attacher\ncsi-mock-volumes-6885                58s         Normal    Started                      pod/csi-mockplugin-attacher-0                               Started container csi-attacher\ncsi-mock-volumes-6885                60s         Normal    SuccessfulCreate             statefulset/csi-mockplugin-attacher                         create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\ncsi-mock-volumes-6885                60s         Normal    SuccessfulCreate             statefulset/csi-mockplugin                                  create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\ncsi-mock-volumes-6885                58s         Normal    ExternalProvisioning         persistentvolumeclaim/pvc-flqll                             waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-6885\" or manually created by system administrator\ncsi-mock-volumes-6885                48s         Normal    Provisioning                 persistentvolumeclaim/pvc-flqll                             External provisioner is provisioning volume for claim \"csi-mock-volumes-6885/pvc-flqll\"\ncsi-mock-volumes-6885                48s         Normal    ProvisioningSucceeded        persistentvolumeclaim/pvc-flqll                             Successfully provisioned volume pvc-00582b0e-ef2d-4613-8059-48f011f8578f\ncsi-mock-volumes-6885                47s         Warning   FailedAttachVolume           pod/pvc-volume-tester-2w7vb                                 AttachVolume.Attach failed for volume \"pvc-00582b0e-ef2d-4613-8059-48f011f8578f\" : cannot find NodeID for driver \"csi-mock-csi-mock-volumes-6885\" for node \"kind-worker\"\ncsi-mock-volumes-6885                46s         Normal    SuccessfulAttachVolume       pod/pvc-volume-tester-2w7vb                                 AttachVolume.Attach succeeded for volume \"pvc-00582b0e-ef2d-4613-8059-48f011f8578f\"\ncsi-mock-volumes-6885                36s         Normal    Pulled                       pod/pvc-volume-tester-2w7vb                                 Container image \"k8s.gcr.io/pause:3.1\" already present on machine\ncsi-mock-volumes-6885                36s         Normal    Created                      pod/pvc-volume-tester-2w7vb                                 Created container volume-tester\ncsi-mock-volumes-6885                36s         Normal    Started                      pod/pvc-volume-tester-2w7vb                                 Started container volume-tester\ncsi-mock-volumes-6885                20s         Normal    Killing                      pod/pvc-volume-tester-2w7vb                                 Stopping container volume-tester\ncsi-mock-volumes-7542                2m18s       Normal    Pulling                      pod/csi-mockplugin-0                                        Pulling image \"quay.io/k8scsi/csi-provisioner:v1.5.0\"\ncsi-mock-volumes-7542                2m12s       Normal    Pulled                       pod/csi-mockplugin-0                                        Successfully pulled image \"quay.io/k8scsi/csi-provisioner:v1.5.0\"\ncsi-mock-volumes-7542                2m12s       Normal    Created                      pod/csi-mockplugin-0                                        Created container csi-provisioner\ncsi-mock-volumes-7542                2m12s       Normal    Started                      pod/csi-mockplugin-0                                        Started container csi-provisioner\ncsi-mock-volumes-7542                2m12s       Normal    Pulling                      pod/csi-mockplugin-0                                        Pulling image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\"\ncsi-mock-volumes-7542                2m5s        Normal    Pulled                       pod/csi-mockplugin-0                                        Successfully pulled image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\"\ncsi-mock-volumes-7542                2m4s        Normal    Created                      pod/csi-mockplugin-0                                        Created container driver-registrar\ncsi-mock-volumes-7542                2m4s        Normal    Started                      pod/csi-mockplugin-0                                        Started container driver-registrar\ncsi-mock-volumes-7542                2m4s        Normal    Pulling                      pod/csi-mockplugin-0                                        Pulling image \"quay.io/k8scsi/mock-driver:v2.1.0\"\ncsi-mock-volumes-7542                112s        Normal    Pulled                       pod/csi-mockplugin-0                                        Successfully pulled image \"quay.io/k8scsi/mock-driver:v2.1.0\"\ncsi-mock-volumes-7542                112s        Normal    Created                      pod/csi-mockplugin-0                                        Created container mock\ncsi-mock-volumes-7542                111s        Normal    Started                      pod/csi-mockplugin-0                                        Started container mock\ncsi-mock-volumes-7542                2s          Normal    Killing                      pod/csi-mockplugin-0                                        Stopping container mock\ncsi-mock-volumes-7542                2s          Normal    Killing                      pod/csi-mockplugin-0                                        Stopping container driver-registrar\ncsi-mock-volumes-7542                2s          Normal    Killing                      pod/csi-mockplugin-0                                        Stopping container csi-provisioner\ncsi-mock-volumes-7542                2m18s       Normal    Pulling                      pod/csi-mockplugin-resizer-0                                Pulling image \"quay.io/k8scsi/csi-resizer:v0.4.0\"\ncsi-mock-volumes-7542                2m9s        Normal    Pulled             
          pod/csi-mockplugin-resizer-0                                Successfully pulled image \"quay.io/k8scsi/csi-resizer:v0.4.0\"\ncsi-mock-volumes-7542                2m8s        Normal    Created                      pod/csi-mockplugin-resizer-0                                Created container csi-resizer\ncsi-mock-volumes-7542                2m8s        Normal    Started                      pod/csi-mockplugin-resizer-0                                Started container csi-resizer\ncsi-mock-volumes-7542                2m20s       Normal    SuccessfulCreate             statefulset/csi-mockplugin-resizer                          create Pod csi-mockplugin-resizer-0 in StatefulSet csi-mockplugin-resizer successful\ncsi-mock-volumes-7542                2m20s       Normal    SuccessfulCreate             statefulset/csi-mockplugin                                  create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\ncsi-mock-volumes-7542                118s        Normal    ExternalProvisioning         persistentvolumeclaim/pvc-pfg98                             waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-7542\" or manually created by system administrator\ncsi-mock-volumes-7542                111s        Normal    Provisioning                 persistentvolumeclaim/pvc-pfg98                             External provisioner is provisioning volume for claim \"csi-mock-volumes-7542/pvc-pfg98\"\ncsi-mock-volumes-7542                111s        Normal    ProvisioningSucceeded        persistentvolumeclaim/pvc-pfg98                             Successfully provisioned volume pvc-6909ecbc-b532-45f6-a650-3e9947fcf736\ncsi-mock-volumes-7542                101s        Warning   ExternalExpanding            persistentvolumeclaim/pvc-pfg98                             Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\ncsi-mock-volumes-7542 
               101s        Normal    Resizing                     persistentvolumeclaim/pvc-pfg98                             External resizer is resizing volume pvc-6909ecbc-b532-45f6-a650-3e9947fcf736\ncsi-mock-volumes-7542                100s        Normal    FileSystemResizeRequired     persistentvolumeclaim/pvc-pfg98                             Require file system resize of volume on node\ncsi-mock-volumes-7542                21s         Normal    FileSystemResizeSuccessful   persistentvolumeclaim/pvc-pfg98                             MountVolume.NodeExpandVolume succeeded for volume \"pvc-6909ecbc-b532-45f6-a650-3e9947fcf736\"\ncsi-mock-volumes-7542                107s        Normal    Pulled                       pod/pvc-volume-tester-gks4f                                 Container image \"k8s.gcr.io/pause:3.1\" already present on machine\ncsi-mock-volumes-7542                107s        Normal    Created                      pod/pvc-volume-tester-gks4f                                 Created container volume-tester\ncsi-mock-volumes-7542                107s        Normal    Started                      pod/pvc-volume-tester-gks4f                                 Started container volume-tester\ncsi-mock-volumes-7542                21s         Normal    FileSystemResizeSuccessful   pod/pvc-volume-tester-gks4f                                 MountVolume.NodeExpandVolume succeeded for volume \"pvc-6909ecbc-b532-45f6-a650-3e9947fcf736\"\ncsi-mock-volumes-7542                20s         Normal    Killing                      pod/pvc-volume-tester-gks4f                                 Stopping container volume-tester\ncsi-mock-volumes-7850                14s         Normal    Pulled                       pod/csi-mockplugin-0                                        Container image \"quay.io/k8scsi/csi-provisioner:v1.5.0\" already present on machine\ncsi-mock-volumes-7850                13s         Normal    Created                      pod/csi-mockplugin-0           
                             Created container csi-provisioner\ncsi-mock-volumes-7850                13s         Normal    Started                      pod/csi-mockplugin-0                                        Started container csi-provisioner\ncsi-mock-volumes-7850                13s         Normal    Pulled                       pod/csi-mockplugin-0                                        Container image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\" already present on machine\ncsi-mock-volumes-7850                13s         Normal    Created                      pod/csi-mockplugin-0                                        Created container driver-registrar\ncsi-mock-volumes-7850                13s         Normal    Started                      pod/csi-mockplugin-0                                        Started container driver-registrar\ncsi-mock-volumes-7850                13s         Normal    Pulled                       pod/csi-mockplugin-0                                        Container image \"quay.io/k8scsi/mock-driver:v2.1.0\" already present on machine\ncsi-mock-volumes-7850                13s         Normal    Created                      pod/csi-mockplugin-0                                        Created container mock\ncsi-mock-volumes-7850                13s         Normal    Started                      pod/csi-mockplugin-0                                        Started container mock\ncsi-mock-volumes-7850                13s         Normal    Pulled                       pod/csi-mockplugin-attacher-0                               Container image \"quay.io/k8scsi/csi-attacher:v2.1.0\" already present on machine\ncsi-mock-volumes-7850                13s         Normal    Created                      pod/csi-mockplugin-attacher-0                               Created container csi-attacher\ncsi-mock-volumes-7850                13s         Normal    Started                      pod/csi-mockplugin-attacher-0                               Started 
container csi-attacher\ncsi-mock-volumes-7850                15s         Normal    SuccessfulCreate             statefulset/csi-mockplugin-attacher                         create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\ncsi-mock-volumes-7850                15s         Normal    SuccessfulCreate             statefulset/csi-mockplugin                                  create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\ncsi-mock-volumes-7850                13s         Normal    ExternalProvisioning         persistentvolumeclaim/pvc-pnvs8                             waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-7850\" or manually created by system administrator\ncsi-mock-volumes-7850                12s         Normal    Provisioning                 persistentvolumeclaim/pvc-pnvs8                             External provisioner is provisioning volume for claim \"csi-mock-volumes-7850/pvc-pnvs8\"\ncsi-mock-volumes-7850                12s         Normal    ProvisioningSucceeded        persistentvolumeclaim/pvc-pnvs8                             Successfully provisioned volume pvc-8459ee6b-d1dc-4cd8-814c-9341399138b7\ncsi-mock-volumes-7850                10s         Warning   FailedAttachVolume           pod/pvc-volume-tester-98vdt                                 AttachVolume.Attach failed for volume \"pvc-8459ee6b-d1dc-4cd8-814c-9341399138b7\" : cannot find NodeID for driver \"csi-mock-csi-mock-volumes-7850\" for node \"kind-worker2\"\ncsi-mock-volumes-7850                9s          Normal    SuccessfulAttachVolume       pod/pvc-volume-tester-98vdt                                 AttachVolume.Attach succeeded for volume \"pvc-8459ee6b-d1dc-4cd8-814c-9341399138b7\"\ncsi-mock-volumes-8526                47s         Normal    Pulled                       pod/csi-mockplugin-0                                        Container image \"quay.io/k8scsi/csi-provisioner:v1.5.0\" 
already present on machine\ncsi-mock-volumes-8526                47s         Normal    Created                      pod/csi-mockplugin-0                                        Created container csi-provisioner\ncsi-mock-volumes-8526                47s         Normal    Started                      pod/csi-mockplugin-0                                        Started container csi-provisioner\ncsi-mock-volumes-8526                47s         Normal    Pulled                       pod/csi-mockplugin-0                                        Container image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\" already present on machine\ncsi-mock-volumes-8526                47s         Normal    Created                      pod/csi-mockplugin-0                                        Created container driver-registrar\ncsi-mock-volumes-8526                46s         Normal    Started                      pod/csi-mockplugin-0                                        Started container driver-registrar\ncsi-mock-volumes-8526                46s         Normal    Pulled                       pod/csi-mockplugin-0                                        Container image \"quay.io/k8scsi/mock-driver:v2.1.0\" already present on machine\ncsi-mock-volumes-8526                46s         Normal    Created                      pod/csi-mockplugin-0                                        Created container mock\ncsi-mock-volumes-8526                46s         Normal    Started                      pod/csi-mockplugin-0                                        Started container mock\ncsi-mock-volumes-8526                47s         Normal    Pulled                       pod/csi-mockplugin-attacher-0                               Container image \"quay.io/k8scsi/csi-attacher:v2.1.0\" already present on machine\ncsi-mock-volumes-8526                47s         Normal    Created                      pod/csi-mockplugin-attacher-0                               Created container 
csi-attacher\ncsi-mock-volumes-8526                47s         Normal    Started                      pod/csi-mockplugin-attacher-0                               Started container csi-attacher\ncsi-mock-volumes-8526                49s         Normal    SuccessfulCreate             statefulset/csi-mockplugin-attacher                         create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\ncsi-mock-volumes-8526                49s         Normal    SuccessfulCreate             statefulset/csi-mockplugin                                  create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\ncsi-mock-volumes-8526                49s         Normal    ExternalProvisioning         persistentvolumeclaim/pvc-mvk59                             waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-8526\" or manually created by system administrator\ncsi-mock-volumes-8526                46s         Normal    Provisioning                 persistentvolumeclaim/pvc-mvk59                             External provisioner is provisioning volume for claim \"csi-mock-volumes-8526/pvc-mvk59\"\ncsi-mock-volumes-8526                46s         Normal    ProvisioningSucceeded        persistentvolumeclaim/pvc-mvk59                             Successfully provisioned volume pvc-09d25c8b-fd4a-4dde-a94a-a87c21cc9d1e\ncsi-mock-volumes-8526                44s         Normal    SuccessfulAttachVolume       pod/pvc-volume-tester-rbwgj                                 AttachVolume.Attach succeeded for volume \"pvc-09d25c8b-fd4a-4dde-a94a-a87c21cc9d1e\"\ncsi-mock-volumes-8526                33s         Normal    Pulled                       pod/pvc-volume-tester-rbwgj                                 Container image \"k8s.gcr.io/pause:3.1\" already present on machine\ncsi-mock-volumes-8526                33s         Normal    Created                      pod/pvc-volume-tester-rbwgj                             
    Created container volume-tester\ncsi-mock-volumes-8526                32s         Normal    Started                      pod/pvc-volume-tester-rbwgj                                 Started container volume-tester\ncsi-mock-volumes-8526                20s         Normal    Killing                      pod/pvc-volume-tester-rbwgj                                 Stopping container volume-tester\ncsi-mock-volumes-8822                71s         Normal    Pulled                       pod/csi-mockplugin-0                                        Container image \"quay.io/k8scsi/csi-provisioner:v1.5.0\" already present on machine\ncsi-mock-volumes-8822                71s         Normal    Created                      pod/csi-mockplugin-0                                        Created container csi-provisioner\ncsi-mock-volumes-8822                71s         Normal    Started                      pod/csi-mockplugin-0                                        Started container csi-provisioner\ncsi-mock-volumes-8822                71s         Normal    Pulled                       pod/csi-mockplugin-0                                        Container image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\" already present on machine\ncsi-mock-volumes-8822                70s         Normal    Created                      pod/csi-mockplugin-0                                        Created container driver-registrar\ncsi-mock-volumes-8822                70s         Normal    Started                      pod/csi-mockplugin-0                                        Started container driver-registrar\ncsi-mock-volumes-8822                70s         Normal    Pulled                       pod/csi-mockplugin-0                                        Container image \"quay.io/k8scsi/mock-driver:v2.1.0\" already present on machine\ncsi-mock-volumes-8822                70s         Normal    Created                      pod/csi-mockplugin-0                                        Created 
container mock\ncsi-mock-volumes-8822                69s         Normal    Started                      pod/csi-mockplugin-0                                        Started container mock\ncsi-mock-volumes-8822                71s         Normal    Pulled                       pod/csi-mockplugin-attacher-0                               Container image \"quay.io/k8scsi/csi-attacher:v2.1.0\" already present on machine\ncsi-mock-volumes-8822                71s         Normal    Created                      pod/csi-mockplugin-attacher-0                               Created container csi-attacher\ncsi-mock-volumes-8822                70s         Normal    Started                      pod/csi-mockplugin-attacher-0                               Started container csi-attacher\ncsi-mock-volumes-8822                74s         Normal    SuccessfulCreate             statefulset/csi-mockplugin-attacher                         create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\ncsi-mock-volumes-8822                74s         Normal    SuccessfulCreate             statefulset/csi-mockplugin                                  create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\ncsi-mock-volumes-8822                72s         Normal    ExternalProvisioning         persistentvolumeclaim/pvc-bhf4k                             waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-8822\" or manually created by system administrator\ncsi-mock-volumes-8822                68s         Normal    Provisioning                 persistentvolumeclaim/pvc-bhf4k                             External provisioner is provisioning volume for claim \"csi-mock-volumes-8822/pvc-bhf4k\"\ncsi-mock-volumes-8822                68s         Normal    ProvisioningSucceeded        persistentvolumeclaim/pvc-bhf4k                             Successfully provisioned volume 
pvc-17503600-c5e1-464b-86c7-33e9a8004ace\ncsi-mock-volumes-8822                28s         Warning   ExternalExpanding            persistentvolumeclaim/pvc-bhf4k                             Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\ncsi-mock-volumes-8822                67s         Warning   FailedAttachVolume           pod/pvc-volume-tester-ch6xl                                 AttachVolume.Attach failed for volume \"pvc-17503600-c5e1-464b-86c7-33e9a8004ace\" : cannot find NodeID for driver \"csi-mock-csi-mock-volumes-8822\" for node \"kind-worker2\"\ncsi-mock-volumes-8822                66s         Normal    SuccessfulAttachVolume       pod/pvc-volume-tester-ch6xl                                 AttachVolume.Attach succeeded for volume \"pvc-17503600-c5e1-464b-86c7-33e9a8004ace\"\ncsi-mock-volumes-8822                54s         Normal    Pulled                       pod/pvc-volume-tester-ch6xl                                 Container image \"k8s.gcr.io/pause:3.1\" already present on machine\ncsi-mock-volumes-8822                54s         Normal    Created                      pod/pvc-volume-tester-ch6xl                                 Created container volume-tester\ncsi-mock-volumes-8822                53s         Normal    Started                      pod/pvc-volume-tester-ch6xl                                 Started container volume-tester\ndefault                              3m43s       Normal    Starting                     node/kind-control-plane                                     Starting kubelet.\ndefault                              3m43s       Warning   CheckLimitsForResolvConf     node/kind-control-plane                                     Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\ndefault                              3m43s       Normal    NodeHasSufficientMemory      node/kind-control-plane                       
              Node kind-control-plane status is now: NodeHasSufficientMemory\ndefault                              3m43s       Normal    NodeHasNoDiskPressure        node/kind-control-plane                                     Node kind-control-plane status is now: NodeHasNoDiskPressure\ndefault                              3m43s       Normal    NodeHasSufficientPID         node/kind-control-plane                                     Node kind-control-plane status is now: NodeHasSufficientPID\ndefault                              3m43s       Normal    NodeAllocatableEnforced      node/kind-control-plane                                     Updated Node Allocatable limit across pods\ndefault                              3m28s       Normal    RegisteredNode               node/kind-control-plane                                     Node kind-control-plane event: Registered Node kind-control-plane in Controller\ndefault                              3m26s       Normal    Starting                     node/kind-control-plane                                     Starting kube-proxy.\ndefault                              3m13s       Normal    NodeReady                    node/kind-control-plane                                     Node kind-control-plane status is now: NodeReady\ndefault                              3m9s        Normal    NodeHasSufficientPID         node/kind-worker                                            Node kind-worker status is now: NodeHasSufficientPID\ndefault                              3m8s        Normal    RegisteredNode               node/kind-worker                                            Node kind-worker event: Registered Node kind-worker in Controller\ndefault                              3m5s        Normal    Starting                     node/kind-worker                                            Starting kube-proxy.\ndefault                              3m10s       Normal    NodeHasNoDiskPressure        node/kind-worker2                      
                     Node kind-worker2 status is now: NodeHasNoDiskPressure\ndefault                              3m8s        Normal    RegisteredNode               node/kind-worker2                                           Node kind-worker2 event: Registered Node kind-worker2 in Controller\ndefault                              3m5s        Normal    Starting                     node/kind-worker2                                           Starting kube-proxy.\ndns-9442                             68s         Normal    Scheduled                    pod/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f           Successfully assigned dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f to kind-worker\ndns-9442                             64s         Normal    Pulling                      pod/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f           Pulling image \"gcr.io/kubernetes-e2e-test-images/test-webserver:1.0\"\ndns-9442                             63s         Normal    Pulled                       pod/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f           Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/test-webserver:1.0\"\ndns-9442                             63s         Normal    Created                      pod/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f           Created container webserver\ndns-9442                             63s         Normal    Started                      pod/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f           Started container webserver\ndns-9442                             62s         Normal    Pulling                      pod/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f           Pulling image \"gcr.io/kubernetes-e2e-test-images/dnsutils:1.1\"\ndns-9442                             60s         Normal    Pulled                       pod/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f           Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/dnsutils:1.1\"\ndns-9442                             60s         
Normal    Created                      pod/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f           Created container querier\ndns-9442                             59s         Normal    Started                      pod/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f           Started container querier\ndns-9442                             59s         Normal    Pulling                      pod/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f           Pulling image \"gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0\"\ndns-9442                             52s         Normal    Pulled                       pod/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f           Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0\"\ndns-9442                             51s         Normal    Created                      pod/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f           Created container jessie-querier\ndns-9442                             51s         Normal    Started                      pod/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f           Started container jessie-querier\nephemeral-3610                       42s         Normal    Pulled                       pod/csi-hostpath-attacher-0                                 Container image \"quay.io/k8scsi/csi-attacher:v2.1.0\" already present on machine\nephemeral-3610                       42s         Normal    Created                      pod/csi-hostpath-attacher-0                                 Created container csi-attacher\nephemeral-3610                       41s         Normal    Started                      pod/csi-hostpath-attacher-0                                 Started container csi-attacher\nephemeral-3610                       43s         Normal    SuccessfulCreate             statefulset/csi-hostpath-attacher                           create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\nephemeral-3610                       42s         Normal   
 Pulled                       pod/csi-hostpath-provisioner-0                              Container image \"quay.io/k8scsi/csi-provisioner:v1.5.0\" already present on machine\nephemeral-3610                       42s         Normal    Created                      pod/csi-hostpath-provisioner-0                              Created container csi-provisioner\nephemeral-3610                       41s         Normal    Started                      pod/csi-hostpath-provisioner-0                              Started container csi-provisioner\nephemeral-3610                       43s         Normal    SuccessfulCreate             statefulset/csi-hostpath-provisioner                        create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\nephemeral-3610                       42s         Normal    Pulled                       pod/csi-hostpath-resizer-0                                  Container image \"quay.io/k8scsi/csi-resizer:v0.4.0\" already present on machine\nephemeral-3610                       42s         Normal    Created                      pod/csi-hostpath-resizer-0                                  Created container csi-resizer\nephemeral-3610                       41s         Normal    Started                      pod/csi-hostpath-resizer-0                                  Started container csi-resizer\nephemeral-3610                       43s         Normal    SuccessfulCreate             statefulset/csi-hostpath-resizer                            create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\nephemeral-3610                       42s         Normal    Pulled                       pod/csi-hostpathplugin-0                                    Container image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\" already present on machine\nephemeral-3610                       42s         Normal    Created                      pod/csi-hostpathplugin-0                                    Created 
container node-driver-registrar
ephemeral-3610   41s   Normal   Started   pod/csi-hostpathplugin-0   Started container node-driver-registrar
ephemeral-3610   41s   Normal   Pulled   pod/csi-hostpathplugin-0   Container image "quay.io/k8scsi/hostpathplugin:v1.3.0-rc1" already present on machine
ephemeral-3610   41s   Normal   Created   pod/csi-hostpathplugin-0   Created container hostpath
ephemeral-3610   41s   Normal   Started   pod/csi-hostpathplugin-0   Started container hostpath
ephemeral-3610   41s   Normal   Pulled   pod/csi-hostpathplugin-0   Container image "quay.io/k8scsi/livenessprobe:v1.1.0" already present on machine
ephemeral-3610   41s   Normal   Created   pod/csi-hostpathplugin-0   Created container liveness-probe
ephemeral-3610   40s   Normal   Started   pod/csi-hostpathplugin-0   Started container liveness-probe
ephemeral-3610   43s   Normal   SuccessfulCreate   statefulset/csi-hostpathplugin   create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
ephemeral-3610   41s   Normal   Pulled   pod/csi-snapshotter-0   Container image "quay.io/k8scsi/csi-snapshotter:v2.0.0" already present on machine
ephemeral-3610   41s   Normal   Created   pod/csi-snapshotter-0   Created container csi-snapshotter
ephemeral-3610   40s   Normal   Started   pod/csi-snapshotter-0   Started container csi-snapshotter
ephemeral-3610   43s   Normal   SuccessfulCreate   statefulset/csi-snapshotter   create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful
ephemeral-3610   41s   Warning   FailedMount   pod/inline-volume-tester-d6jf6   MountVolume.SetUp failed for volume "my-volume-0" : kubernetes.io/csi: mounter.SetUpAt failed to get CSI client: driver name csi-hostpath-ephemeral-3610 not found in the list of registered CSI drivers
ephemeral-3610   38s   Normal   Pulled   pod/inline-volume-tester-d6jf6   Container image "docker.io/library/busybox:1.29" already present on machine
ephemeral-3610   38s   Normal   Created   pod/inline-volume-tester-d6jf6   Created container csi-volume-tester
ephemeral-3610   38s   Normal   Started   pod/inline-volume-tester-d6jf6   Started container csi-volume-tester
ephemeral-3610   27s   Normal   Killing   pod/inline-volume-tester-d6jf6   Stopping container csi-volume-tester
ephemeral-4014   83s   Normal   Pulled   pod/csi-hostpath-attacher-0   Container image "quay.io/k8scsi/csi-attacher:v2.1.0" already present on machine
ephemeral-4014   83s   Normal   Created   pod/csi-hostpath-attacher-0   Created container csi-attacher
ephemeral-4014   82s   Normal   Started   pod/csi-hostpath-attacher-0   Started container csi-attacher
ephemeral-4014   85s   Normal   SuccessfulCreate   statefulset/csi-hostpath-attacher   create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful
ephemeral-4014   83s   Normal   Pulled   pod/csi-hostpath-provisioner-0   Container image "quay.io/k8scsi/csi-provisioner:v1.5.0" already present on machine
ephemeral-4014   82s   Normal   Created   pod/csi-hostpath-provisioner-0   Created container csi-provisioner
ephemeral-4014   82s   Normal   Started   pod/csi-hostpath-provisioner-0   Started container csi-provisioner
ephemeral-4014   85s   Normal   SuccessfulCreate   statefulset/csi-hostpath-provisioner   create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful
ephemeral-4014   82s   Normal   Pulled   pod/csi-hostpath-resizer-0   Container image "quay.io/k8scsi/csi-resizer:v0.4.0" already present on machine
ephemeral-4014   82s   Normal   Created   pod/csi-hostpath-resizer-0   Created container csi-resizer
ephemeral-4014   82s   Normal   Started   pod/csi-hostpath-resizer-0   Started container csi-resizer
ephemeral-4014   85s   Normal   SuccessfulCreate   statefulset/csi-hostpath-resizer   create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful
ephemeral-4014   83s   Normal   Pulled   pod/csi-hostpathplugin-0   Container image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0" already present on machine
ephemeral-4014   83s   Normal   Created   pod/csi-hostpathplugin-0   Created container node-driver-registrar
ephemeral-4014   82s   Normal   Started   pod/csi-hostpathplugin-0   Started container node-driver-registrar
ephemeral-4014   82s   Normal   Pulled   pod/csi-hostpathplugin-0   Container image "quay.io/k8scsi/hostpathplugin:v1.3.0-rc1" already present on machine
ephemeral-4014   82s   Normal   Created   pod/csi-hostpathplugin-0   Created container hostpath
ephemeral-4014   81s   Normal   Started   pod/csi-hostpathplugin-0   Started container hostpath
ephemeral-4014   81s   Normal   Pulled   pod/csi-hostpathplugin-0   Container image "quay.io/k8scsi/livenessprobe:v1.1.0" already present on machine
ephemeral-4014   81s   Normal   Created   pod/csi-hostpathplugin-0   Created container liveness-probe
ephemeral-4014   80s   Normal   Started   pod/csi-hostpathplugin-0   Started container liveness-probe
ephemeral-4014   85s   Normal   SuccessfulCreate   statefulset/csi-hostpathplugin   create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
ephemeral-4014   82s   Normal   Pulled   pod/csi-snapshotter-0   Container image "quay.io/k8scsi/csi-snapshotter:v2.0.0" already present on machine
ephemeral-4014   82s   Normal   Created   pod/csi-snapshotter-0   Created container csi-snapshotter
ephemeral-4014   81s   Normal   Started   pod/csi-snapshotter-0   Started container csi-snapshotter
ephemeral-4014   85s   Normal   SuccessfulCreate   statefulset/csi-snapshotter   create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful
ephemeral-4014   83s   Warning   FailedMount   pod/inline-volume-tester-vplmv   MountVolume.SetUp failed for volume "my-volume-0" : kubernetes.io/csi: mounter.SetUpAt failed to get CSI client: driver name csi-hostpath-ephemeral-4014 not found in the list of registered CSI drivers
ephemeral-4014   79s   Normal   Pulled   pod/inline-volume-tester-vplmv   Container image "docker.io/library/busybox:1.29" already present on machine
ephemeral-4014   79s   Normal   Created   pod/inline-volume-tester-vplmv   Created container csi-volume-tester
ephemeral-4014   79s   Normal   Started   pod/inline-volume-tester-vplmv   Started container csi-volume-tester
ephemeral-4014   8s   Normal   Killing   pod/inline-volume-tester-vplmv   Stopping container csi-volume-tester
ephemeral-4014   63s   Normal   Pulled   pod/inline-volume-tester2-g5n6j   Container image "docker.io/library/busybox:1.29" already present on machine
ephemeral-4014   63s   Normal   Created   pod/inline-volume-tester2-g5n6j   Created container csi-volume-tester
ephemeral-4014   62s   Normal   Started   pod/inline-volume-tester2-g5n6j   Started container csi-volume-tester
ephemeral-4014   50s   Normal   Killing   pod/inline-volume-tester2-g5n6j   Stopping container csi-volume-tester
job-3875   21s   Normal   Scheduled   pod/adopt-release-8j29c   Successfully assigned job-3875/adopt-release-8j29c to kind-worker
job-3875   19s   Normal   Pulled   pod/adopt-release-8j29c   Container image "docker.io/library/busybox:1.29" already present on machine
job-3875   19s   Normal   Created   pod/adopt-release-8j29c   Created container c
job-3875   19s   Normal   Started   pod/adopt-release-8j29c   Started container c
job-3875   21s   Normal   Scheduled   pod/adopt-release-g4qg2   Successfully assigned job-3875/adopt-release-g4qg2 to kind-worker
job-3875   19s   Normal   Pulled   pod/adopt-release-g4qg2   Container image "docker.io/library/busybox:1.29" already present on machine
job-3875   19s   Normal   Created   pod/adopt-release-g4qg2   Created container c
job-3875   19s   Normal   Started   pod/adopt-release-g4qg2   Started container c
job-3875   2s   Normal   Scheduled   pod/adopt-release-jg58t   Successfully assigned job-3875/adopt-release-jg58t to kind-worker
job-3875   21s   Normal   SuccessfulCreate   job/adopt-release   Created pod: adopt-release-8j29c
job-3875   21s   Normal   SuccessfulCreate   job/adopt-release   Created pod: adopt-release-g4qg2
job-3875   2s   Normal   SuccessfulCreate   job/adopt-release   Created pod: adopt-release-jg58t
kube-system   3m13s   Warning   FailedScheduling   pod/coredns-6955765f44-45qgf   0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
kube-system   3m11s   Normal   Scheduled   pod/coredns-6955765f44-45qgf   Successfully assigned kube-system/coredns-6955765f44-45qgf to kind-control-plane
kube-system   3m10s   Normal   Pulled   pod/coredns-6955765f44-45qgf   Container image "k8s.gcr.io/coredns:1.6.5" already present on machine
kube-system   3m9s   Normal   Created   pod/coredns-6955765f44-45qgf   Created container coredns
kube-system   3m9s   Normal   Started   pod/coredns-6955765f44-45qgf   Started container coredns
kube-system   3m13s   Warning   FailedScheduling   pod/coredns-6955765f44-blnrh   0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
kube-system   3m11s   Normal   Scheduled   pod/coredns-6955765f44-blnrh   Successfully assigned kube-system/coredns-6955765f44-blnrh to kind-control-plane
kube-system   3m10s   Normal   Pulled   pod/coredns-6955765f44-blnrh   Container image "k8s.gcr.io/coredns:1.6.5" already present on machine
kube-system   3m9s   Normal   Created   pod/coredns-6955765f44-blnrh   Created container coredns
kube-system   3m9s   Normal   Started   pod/coredns-6955765f44-blnrh   Started container coredns
kube-system   3m28s   Normal   SuccessfulCreate   replicaset/coredns-6955765f44   Created pod: coredns-6955765f44-45qgf
kube-system   3m28s   Normal   SuccessfulCreate   replicaset/coredns-6955765f44   Created pod: coredns-6955765f44-blnrh
kube-system   3m28s   Normal   ScalingReplicaSet   deployment/coredns   Scaled up replica set coredns-6955765f44 to 2
kube-system   3m28s   Normal   Scheduled   pod/kindnet-2hf8t   Successfully assigned kube-system/kindnet-2hf8t to kind-control-plane
kube-system   3m26s   Normal   Pulled   pod/kindnet-2hf8t   Container image "kindest/kindnetd:0.5.4" already present on machine
kube-system   3m25s   Normal   Created   pod/kindnet-2hf8t   Created container kindnet-cni
kube-system   3m25s   Normal   Started   pod/kindnet-2hf8t   Started container kindnet-cni
kube-system   3m9s   Normal   Scheduled   pod/kindnet-6rhkp   Successfully assigned kube-system/kindnet-6rhkp to kind-worker
kube-system   3m8s   Normal   Pulled   pod/kindnet-6rhkp   Container image "kindest/kindnetd:0.5.4" already present on machine
kube-system   3m6s   Normal   Created   pod/kindnet-6rhkp   Created container kindnet-cni
kube-system   3m5s   Normal   Started   pod/kindnet-6rhkp   Started container kindnet-cni
kube-system   3m10s   Normal   Scheduled   pod/kindnet-jxzbl   Successfully assigned kube-system/kindnet-jxzbl to kind-worker2
kube-system   3m9s   Normal   Pulled   pod/kindnet-jxzbl   Container image "kindest/kindnetd:0.5.4" already present on machine
kube-system   3m6s   Normal   Created   pod/kindnet-jxzbl   Created container kindnet-cni
kube-system   3m5s   Normal   Started   pod/kindnet-jxzbl   Started container kindnet-cni
kube-system   3m28s   Normal   SuccessfulCreate   daemonset/kindnet   Created pod: kindnet-2hf8t
kube-system   3m10s   Normal   SuccessfulCreate   daemonset/kindnet   Created pod: kindnet-jxzbl
kube-system   3m9s   Normal   SuccessfulCreate   daemonset/kindnet   Created pod: kindnet-6rhkp
kube-system   3m44s   Normal   LeaderElection   endpoints/kube-controller-manager   kind-control-plane_271067d6-33cf-4930-9ba7-05996c920976 became leader
kube-system   3m44s   Normal   LeaderElection   lease/kube-controller-manager   kind-control-plane_271067d6-33cf-4930-9ba7-05996c920976 became leader
kube-system   3m9s   Normal   Scheduled   pod/kube-proxy-4md69   Successfully assigned kube-system/kube-proxy-4md69 to kind-worker
kube-system   3m9s   Normal   Pulled   pod/kube-proxy-4md69   Container image "k8s.gcr.io/kube-proxy:v1.18.0-alpha.1.681_c12a96f7f64648" already present on machine
kube-system   3m6s   Normal   Created   pod/kube-proxy-4md69   Created container kube-proxy
kube-system   3m6s   Normal   Started   pod/kube-proxy-4md69   Started container kube-proxy
kube-system   3m28s   Normal   Scheduled   pod/kube-proxy-rh967   Successfully assigned kube-system/kube-proxy-rh967 to kind-control-plane
kube-system   3m27s   Normal   Pulled   pod/kube-proxy-rh967   Container image "k8s.gcr.io/kube-proxy:v1.18.0-alpha.1.681_c12a96f7f64648" already present on machine
kube-system   3m26s   Normal   Created   pod/kube-proxy-rh967   Created container kube-proxy
kube-system   3m26s   Normal   Started   pod/kube-proxy-rh967   Started container kube-proxy
kube-system   3m10s   Normal   Scheduled   pod/kube-proxy-sllbk   Successfully assigned kube-system/kube-proxy-sllbk to kind-worker2
kube-system   3m9s   Normal   Pulled   pod/kube-proxy-sllbk   Container image "k8s.gcr.io/kube-proxy:v1.18.0-alpha.1.681_c12a96f7f64648" already present on machine
kube-system   3m6s   Normal   Created   pod/kube-proxy-sllbk   Created container kube-proxy
kube-system   3m6s   Normal   Started   pod/kube-proxy-sllbk   Started container kube-proxy
kube-system   3m28s   Normal   SuccessfulCreate   daemonset/kube-proxy   Created pod: kube-proxy-rh967
kube-system   3m10s   Normal   SuccessfulCreate   daemonset/kube-proxy   Created pod: kube-proxy-sllbk
kube-system   3m9s   Normal   SuccessfulCreate   daemonset/kube-proxy   Created pod: kube-proxy-4md69
kube-system   3m44s   Normal   LeaderElection   endpoints/kube-scheduler   kind-control-plane_c401d2bf-7ea3-46fd-a507-5d6babc3a00c became leader
kube-system   3m44s   Normal   LeaderElection   lease/kube-scheduler   kind-control-plane_c401d2bf-7ea3-46fd-a507-5d6babc3a00c became leader
kubectl-1737   21s   Normal   Created   pod/agnhost-slave-774cfc759f-n24sl   Created container slave
kubectl-1737   20s   Normal   Started   pod/agnhost-slave-774cfc759f-n24sl   Started container slave
kubectl-1737   39s   Normal   SuccessfulCreate   replicaset/agnhost-slave-774cfc759f   Created pod: agnhost-slave-774cfc759f-n24sl
kubectl-1737   39s   Normal   SuccessfulCreate   replicaset/agnhost-slave-774cfc759f   Created pod: agnhost-slave-774cfc759f-j6wzs
kubectl-1737   39s   Normal   ScalingReplicaSet   deployment/agnhost-slave   Scaled up replica set agnhost-slave-774cfc759f to 2
kubectl-1737   40s   Normal   Scheduled   pod/frontend-6c5f89d5d4-8j828   Successfully assigned kubectl-1737/frontend-6c5f89d5d4-8j828 to kind-worker
kubectl-1737   39s   Normal   Pulled   pod/frontend-6c5f89d5d4-8j828   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
kubectl-1737   38s   Normal   Created   pod/frontend-6c5f89d5d4-8j828   Created container guestbook-frontend
kubectl-1737   38s   Normal   Started   pod/frontend-6c5f89d5d4-8j828   Started container guestbook-frontend
kubectl-1737   40s   Normal   Scheduled   pod/frontend-6c5f89d5d4-r7x9p   Successfully assigned kubectl-1737/frontend-6c5f89d5d4-r7x9p to kind-worker2
kubectl-1737   38s   Normal   Pulled   pod/frontend-6c5f89d5d4-r7x9p   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
kubectl-1737   38s   Normal   Created   pod/frontend-6c5f89d5d4-r7x9p   Created container guestbook-frontend
kubectl-1737   38s   Normal   Started   pod/frontend-6c5f89d5d4-r7x9p   Started container guestbook-frontend
kubectl-1737   40s   Normal   Scheduled   pod/frontend-6c5f89d5d4-zgtz5   Successfully assigned kubectl-1737/frontend-6c5f89d5d4-zgtz5 to kind-worker
kubectl-1737   39s   Normal   Pulled   pod/frontend-6c5f89d5d4-zgtz5   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
kubectl-1737   38s   Normal   Created   pod/frontend-6c5f89d5d4-zgtz5   Created container guestbook-frontend
kubectl-1737   38s   Normal   Started   pod/frontend-6c5f89d5d4-zgtz5   Started container guestbook-frontend
kubectl-1737   40s   Normal   SuccessfulCreate   replicaset/frontend-6c5f89d5d4   Created pod: frontend-6c5f89d5d4-8j828
kubectl-1737   40s   Normal   SuccessfulCreate   replicaset/frontend-6c5f89d5d4   Created pod: frontend-6c5f89d5d4-r7x9p
kubectl-1737   40s   Normal   SuccessfulCreate   replicaset/frontend-6c5f89d5d4   Created pod: frontend-6c5f89d5d4-zgtz5
kubectl-1737   40s   Normal   ScalingReplicaSet   deployment/frontend   Scaled up replica set frontend-6c5f89d5d4 to 3
kubectl-7013   21s   Normal   Created   pod/logs-generator   Created container logs-generator
kubectl-7013   21s   Normal   Started   pod/logs-generator   Started container logs-generator
kubectl-7013   8s   Normal   Killing   pod/logs-generator   Stopping container logs-generator
kubectl-8260   <unknown>   some data here
kubectl-8260   4s   Warning   FailedScheduling   pod/pod1mjn46qfpsp   0/3 nodes are available: 3 Insufficient cpu.
kubectl-8260   3s   Warning   FailedScheduling   pod/pod1mjn46qfpsp   skip schedule deleting pod: kubectl-8260/pod1mjn46qfpsp
kubectl-8260   2s   Normal   WaitForFirstConsumer   persistentvolumeclaim/pvc1mjn46qfpsp   waiting for first consumer to be created before binding
kubectl-8260   1s   Normal   Scheduled   pod/rc1mjn46qfpsp-dm6s9   Successfully assigned kubectl-8260/rc1mjn46qfpsp-dm6s9 to kind-worker
kubectl-8260   1s   Normal   SuccessfulCreate   replicationcontroller/rc1mjn46qfpsp   Created pod: rc1mjn46qfpsp-dm6s9
local-path-storage   23s   Normal   Pulled   pod/create-pvc-2a5ae533-7ba5-4a0c-a04a-5e5484bf85bb   Container image "k8s.gcr.io/debian-base:v2.0.0" already present on machine
local-path-storage   23s   Normal   Created   pod/create-pvc-2a5ae533-7ba5-4a0c-a04a-5e5484bf85bb   Created container local-path-create
local-path-storage   22s   Normal   Started   pod/create-pvc-2a5ae533-7ba5-4a0c-a04a-5e5484bf85bb   Started container local-path-create
local-path-storage   10s   Normal   Pulled   pod/create-pvc-cb3c702b-655f-4cd1-b586-5b359f48624d   Container image "k8s.gcr.io/debian-base:v2.0.0" already present on machine
local-path-storage   10s   Normal   Created   pod/create-pvc-cb3c702b-655f-4cd1-b586-5b359f48624d   Created container local-path-create
local-path-storage   10s   Normal   Started   pod/create-pvc-cb3c702b-655f-4cd1-b586-5b359f48624d   Started container local-path-create
local-path-storage   3m13s   Warning   FailedScheduling   pod/local-path-provisioner-7745554f7f-9fxhw   0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
local-path-storage   3m11s   Normal   Scheduled   pod/local-path-provisioner-7745554f7f-9fxhw   Successfully assigned local-path-storage/local-path-provisioner-7745554f7f-9fxhw to kind-control-plane
local-path-storage   3m10s   Normal   Pulled   pod/local-path-provisioner-7745554f7f-9fxhw   Container image "rancher/local-path-provisioner:v0.0.11" already present on machine
local-path-storage   3m9s   Normal   Created   pod/local-path-provisioner-7745554f7f-9fxhw   Created container local-path-provisioner
local-path-storage   3m9s   Normal   Started   pod/local-path-provisioner-7745554f7f-9fxhw   Started container local-path-provisioner
local-path-storage   3m28s   Normal   SuccessfulCreate   replicaset/local-path-provisioner-7745554f7f   Created pod: local-path-provisioner-7745554f7f-9fxhw
local-path-storage   3m28s   Normal   ScalingReplicaSet   deployment/local-path-provisioner   Scaled up replica set local-path-provisioner-7745554f7f to 1
local-path-storage   3m9s   Normal   LeaderElection   endpoints/rancher.io-local-path   local-path-provisioner-7745554f7f-9fxhw_bea625e9-3714-11ea-bdbd-0e6ef275eabc became leader
nettest-4461   10s   Normal   Scheduled   pod/netserver-0   Successfully assigned nettest-4461/netserver-0 to kind-worker
nettest-4461   8s   Normal   Pulled   pod/netserver-0   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
nettest-4461   7s   Normal   Created   pod/netserver-0   Created container webserver
nettest-4461   7s   Normal   Started   pod/netserver-0   Started container webserver
nettest-4461   10s   Normal   Scheduled   pod/netserver-1   Successfully assigned nettest-4461/netserver-1 to kind-worker2
nettest-4461   8s   Normal   Pulled   pod/netserver-1   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
nettest-4461   7s   Normal   Created   pod/netserver-1   Created container webserver
nettest-4461   7s   Normal   Started   pod/netserver-1   Started container webserver
nettest-9734   49s   Normal   Scheduled   pod/netserver-0   Successfully assigned nettest-9734/netserver-0 to kind-worker
nettest-9734   47s   Normal   Pulled   pod/netserver-0   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
nettest-9734   47s   Normal   Created   pod/netserver-0   Created container webserver
nettest-9734   47s   Normal   Started   pod/netserver-0   Started container webserver
nettest-9734   49s   Normal   Scheduled   pod/netserver-1   Successfully assigned nettest-9734/netserver-1 to kind-worker2
nettest-9734   47s   Normal   Pulled   pod/netserver-1   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
nettest-9734   47s   Normal   Created   pod/netserver-1   Created container webserver
nettest-9734   47s   Normal   Started   pod/netserver-1   Started container webserver
nettest-9734   19s   Normal   Scheduled   pod/test-container-pod   Successfully assigned nettest-9734/test-container-pod to kind-worker2
nettest-9734   18s   Normal   Pulled   pod/test-container-pod   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
nettest-9734   18s   Normal   Created   pod/test-container-pod   Created container webserver
nettest-9734   17s   Normal   Started   pod/test-container-pod   Started container webserver
persistent-local-volumes-test-6760   30s   Normal   Pulled   pod/hostexec-kind-worker-dz7v4   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
persistent-local-volumes-test-6760   30s   Normal   Created   pod/hostexec-kind-worker-dz7v4   Created container agnhost
persistent-local-volumes-test-6760   30s   Normal   Started   pod/hostexec-kind-worker-dz7v4   Started container agnhost
persistent-local-volumes-test-6760   19s   Warning   ProvisioningFailed   persistentvolumeclaim/pvc-nw5n2   no volume plugin matched
persistent-local-volumes-test-6760   13s   Normal   Scheduled   pod/security-context-6cd2d986-e61a-4766-bbd4-c2214e21d0e9   Successfully assigned persistent-local-volumes-test-6760/security-context-6cd2d986-e61a-4766-bbd4-c2214e21d0e9 to kind-worker
persistent-local-volumes-test-6760   11s   Normal   Pulled   pod/security-context-6cd2d986-e61a-4766-bbd4-c2214e21d0e9   Container image "docker.io/library/busybox:1.29" already present on machine
persistent-local-volumes-test-6760   11s   Normal   Created   pod/security-context-6cd2d986-e61a-4766-bbd4-c2214e21d0e9   Created container write-pod
persistent-local-volumes-test-6760   10s   Normal   Started   pod/security-context-6cd2d986-e61a-4766-bbd4-c2214e21d0e9   Started container write-pod
persistent-local-volumes-test-6760   2s   Normal   Killing   pod/security-context-6cd2d986-e61a-4766-bbd4-c2214e21d0e9   Stopping container write-pod
persistent-local-volumes-test-7750   20s   Normal   Pulled   pod/hostexec-kind-worker-j8r65   Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
persistent-local-volumes-test-7750   20s   Normal   Created   pod/hostexec-kind-worker-j8r65   Created container agnhost
persistent-local-volumes-test-7750   19s   Normal   Started   pod/hostexec-kind-worker-j8r65   Started container agnhost
provisioning-4063   3s   Normal   Pulled   pod/csi-hostpath-attacher-0   Container image "quay.io/k8scsi/csi-attacher:v2.1.0" already present on machine
provisioning-4063   3s   Normal   Created   pod/csi-hostpath-attacher-0   Created container csi-attacher
provisioning-4063   2s   Normal   Started   pod/csi-hostpath-attacher-0   Started container csi-attacher
provisioning-4063   6s   Normal   SuccessfulCreate   statefulset/csi-hostpath-attacher   create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful
provisioning-4063   2s   Normal   Pulled   pod/csi-hostpath-provisioner-0   Container image "quay.io/k8scsi/csi-provisioner:v1.5.0" already present on machine
provisioning-4063   2s   Normal   Created   pod/csi-hostpath-provisioner-0   Created container csi-provisioner
provisioning-4063   1s   Normal   Started   pod/csi-hostpath-provisioner-0   Started container csi-provisioner
provisioning-4063   5s   Normal   SuccessfulCreate   statefulset/csi-hostpath-provisioner   create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful
provisioning-4063   2s   Normal   Pulled   pod/csi-hostpath-resizer-0   Container image "quay.io/k8scsi/csi-resizer:v0.4.0" already present on machine
provisioning-4063   1s   Normal   Created
      pod/csi-hostpath-resizer-0                                  Created container csi-resizer\nprovisioning-4063                    0s          Normal    Started                      pod/csi-hostpath-resizer-0                                  Started container csi-resizer\nprovisioning-4063                    5s          Normal    SuccessfulCreate             statefulset/csi-hostpath-resizer                            create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\nprovisioning-4063                    3s          Normal    Pulled                       pod/csi-hostpathplugin-0                                    Container image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\" already present on machine\nprovisioning-4063                    2s          Normal    Created                      pod/csi-hostpathplugin-0                                    Created container node-driver-registrar\nprovisioning-4063                    1s          Normal    Started                      pod/csi-hostpathplugin-0                                    Started container node-driver-registrar\nprovisioning-4063                    1s          Normal    Pulled                       pod/csi-hostpathplugin-0                                    Container image \"quay.io/k8scsi/hostpathplugin:v1.3.0-rc1\" already present on machine\nprovisioning-4063                    1s          Normal    Created                      pod/csi-hostpathplugin-0                                    Created container hostpath\nprovisioning-4063                    0s          Normal    Started                      pod/csi-hostpathplugin-0                                    Started container hostpath\nprovisioning-4063                    0s          Normal    Pulled                       pod/csi-hostpathplugin-0                                    Container image \"quay.io/k8scsi/livenessprobe:v1.1.0\" already present on machine\nprovisioning-4063                    5s          
Normal    SuccessfulCreate             statefulset/csi-hostpathplugin                              create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\nprovisioning-4063                    5s          Normal    ExternalProvisioning         persistentvolumeclaim/csi-hostpathtg256                     waiting for a volume to be created, either by external provisioner \"csi-hostpath-provisioning-4063\" or manually created by system administrator\nprovisioning-4063                    2s          Normal    Pulled                       pod/csi-snapshotter-0                                       Container image \"quay.io/k8scsi/csi-snapshotter:v2.0.0\" already present on machine\nprovisioning-4063                    2s          Normal    Created                      pod/csi-snapshotter-0                                       Created container csi-snapshotter\nprovisioning-4063                    0s          Normal    Started                      pod/csi-snapshotter-0                                       Started container csi-snapshotter\nprovisioning-4063                    5s          Normal    SuccessfulCreate             statefulset/csi-snapshotter                                 create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful\nproxy-8892                           4s          Normal    Scheduled                    pod/proxy-service-chhwb-5p8dz                               Successfully assigned proxy-8892/proxy-service-chhwb-5p8dz to kind-worker\nproxy-8892                           0s          Normal    Pulled                       pod/proxy-service-chhwb-5p8dz                               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nproxy-8892                           0s          Normal    Created                      pod/proxy-service-chhwb-5p8dz                               Created container proxy-service-chhwb\nproxy-8892                           5s          Normal 
   SuccessfulCreate             replicationcontroller/proxy-service-chhwb                   Created pod: proxy-service-chhwb-5p8dz\nsecurity-context-test-7441           7s          Normal    Scheduled                    pod/alpine-nnp-nil-6247ddf1-49f6-4bc5-a499-7f36e918507d     Successfully assigned security-context-test-7441/alpine-nnp-nil-6247ddf1-49f6-4bc5-a499-7f36e918507d to kind-worker\nsecurity-context-test-7441           6s          Normal    Pulled                       pod/alpine-nnp-nil-6247ddf1-49f6-4bc5-a499-7f36e918507d     Container image \"gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0\" already present on machine\nsecurity-context-test-7441           6s          Normal    Created                      pod/alpine-nnp-nil-6247ddf1-49f6-4bc5-a499-7f36e918507d     Created container alpine-nnp-nil-6247ddf1-49f6-4bc5-a499-7f36e918507d\nsecurity-context-test-7441           5s          Normal    Started                      pod/alpine-nnp-nil-6247ddf1-49f6-4bc5-a499-7f36e918507d     Started container alpine-nnp-nil-6247ddf1-49f6-4bc5-a499-7f36e918507d\nservices-2366                        27s         Normal    Scheduled                    pod/pod1                                                    Successfully assigned services-2366/pod1 to kind-worker\nservices-2366                        24s         Normal    Pulled                       pod/pod1                                                    Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-2366                        24s         Normal    Created                      pod/pod1                                                    Created container pause\nservices-2366                        23s         Normal    Started                      pod/pod1                                                    Started container pause\nservices-2366                        12s         Normal    Scheduled                    pod/pod2                               
                     Successfully assigned services-2366/pod2 to kind-worker2\nservices-2366                        10s         Normal    Pulled                       pod/pod2                                                    Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-2366                        10s         Normal    Created                      pod/pod2                                                    Created container pause\nservices-2366                        9s          Normal    Started                      pod/pod2                                                    Started container pause\nservices-8847                        1s          Normal    Scheduled                    pod/hostexec                                                Successfully assigned services-8847/hostexec to kind-worker\nstatefulset-1314                     26s         Normal    WaitForFirstConsumer         persistentvolumeclaim/datadir-ss-0                          waiting for first consumer to be created before binding\nstatefulset-1314                     26s         Normal    ExternalProvisioning         persistentvolumeclaim/datadir-ss-0                          waiting for a volume to be created, either by external provisioner \"rancher.io/local-path\" or manually created by system administrator\nstatefulset-1314                     26s         Normal    Provisioning                 persistentvolumeclaim/datadir-ss-0                          External provisioner is provisioning volume for claim \"statefulset-1314/datadir-ss-0\"\nstatefulset-1314                     15s         Normal    ProvisioningSucceeded        persistentvolumeclaim/datadir-ss-0                          Successfully provisioned volume pvc-2a5ae533-7ba5-4a0c-a04a-5e5484bf85bb\nstatefulset-1314                     26s         Warning   FailedScheduling             pod/ss-0                                                    persistentvolumeclaim 
\"datadir-ss-0\" not found\nstatefulset-1314                     14s         Normal    Scheduled                    pod/ss-0                                                    Successfully assigned statefulset-1314/ss-0 to kind-worker\nstatefulset-1314                     12s         Normal    Pulling                      pod/ss-0                                                    Pulling image \"docker.io/library/httpd:2.4.38-alpine\"\nstatefulset-1314                     26s         Normal    SuccessfulCreate             statefulset/ss                                              create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success\nstatefulset-1314                     26s         Normal    SuccessfulCreate             statefulset/ss                                              create Pod ss-0 in StatefulSet ss successful\nstatefulset-742                      11s         Normal    WaitForFirstConsumer         persistentvolumeclaim/datadir-ss-0                          waiting for first consumer to be created before binding\nstatefulset-742                      11s         Normal    ExternalProvisioning         persistentvolumeclaim/datadir-ss-0                          waiting for a volume to be created, either by external provisioner \"rancher.io/local-path\" or manually created by system administrator\nstatefulset-742                      11s         Normal    Provisioning                 persistentvolumeclaim/datadir-ss-0                          External provisioner is provisioning volume for claim \"statefulset-742/datadir-ss-0\"\nstatefulset-742                      2s          Normal    ProvisioningSucceeded        persistentvolumeclaim/datadir-ss-0                          Successfully provisioned volume pvc-cb3c702b-655f-4cd1-b586-5b359f48624d\nstatefulset-742                      1s          Normal    Scheduled                    pod/ss-0                                                    Successfully assigned statefulset-742/ss-0 to 
kind-worker2\nstatefulset-742                      11s         Normal    SuccessfulCreate             statefulset/ss                                              create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success\nstatefulset-742                      11s         Normal    SuccessfulCreate             statefulset/ss                                              create Pod ss-0 in StatefulSet ss successful\nvolume-174                           97s         Normal    Pulled                       pod/hostexec-kind-worker2-j656h                             Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nvolume-174                           97s         Normal    Created                      pod/hostexec-kind-worker2-j656h                             Created container agnhost\nvolume-174                           97s         Normal    Started                      pod/hostexec-kind-worker2-j656h                             Started container agnhost\nvolume-174                           4s          Normal    Killing                      pod/hostexec-kind-worker2-j656h                             Stopping container agnhost\nvolume-174                           35s         Normal    Pulled                       pod/local-client                                            Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-174                           34s         Normal    Created                      pod/local-client                                            Created container local-client\nvolume-174                           34s         Normal    Started                      pod/local-client                                            Started container local-client\nvolume-174                           23s         Normal    Killing                      pod/local-client                                            Stopping container local-client\nvolume-174                           67s      
   Normal    Pulled                       pod/local-injector                                          Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-174                           67s         Normal    Created                      pod/local-injector                                          Created container local-injector\nvolume-174                           67s         Normal    Started                      pod/local-injector                                          Started container local-injector\nvolume-174                           52s         Normal    Killing                      pod/local-injector                                          Stopping container local-injector\nvolume-174                           87s         Warning   ProvisioningFailed           persistentvolumeclaim/pvc-7wrbq                             storageclass.storage.k8s.io \"volume-174\" not found\nvolume-6476                          99s         Normal    Pulled                       pod/hostpath-symlink-prep-volume-6476                       Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-6476                          98s         Normal    Created                      pod/hostpath-symlink-prep-volume-6476                       Created container init-volume-volume-6476\nvolume-6476                          98s         Normal    Started                      pod/hostpath-symlink-prep-volume-6476                       Started container init-volume-volume-6476\nvolume-6476                          34s         Normal    Pulled                       pod/hostpathsymlink-client                                  Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-6476                          34s         Normal    Created                      pod/hostpathsymlink-client                                  Created container hostpathsymlink-client\nvolume-6476                          34s    
     Normal    Started                      pod/hostpathsymlink-client                                  Started container hostpathsymlink-client\nvolume-6476                          15s         Normal    Killing                      pod/hostpathsymlink-client                                  Stopping container hostpathsymlink-client\nvolume-6476                          79s         Normal    Pulled                       pod/hostpathsymlink-injector                                Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-6476                          79s         Normal    Created                      pod/hostpathsymlink-injector                                Created container hostpathsymlink-injector\nvolume-6476                          79s         Normal    Started                      pod/hostpathsymlink-injector                                Started container hostpathsymlink-injector\nvolume-6476                          62s         Normal    Killing                      pod/hostpathsymlink-injector                                Stopping container hostpathsymlink-injector\n"
Jan 14 21:31:12.139: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:32768 --kubeconfig=/root/.kube/kind-test-config get services --all-namespaces'
Jan 14 21:31:12.406: INFO: stderr: ""
Jan 14 21:31:12.406: INFO: stdout: "NAMESPACE           NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE\ndefault             kubernetes                 ClusterIP   10.96.0.1        <none>        443/TCP                         3m46s\ndns-9442            dns-test-service           ClusterIP   None             <none>        80/TCP                          69s\ndns-9442            test-service-2             ClusterIP   10.110.18.188    <none>        80/TCP                          69s\nephemeral-3610      csi-hostpath-attacher      ClusterIP   10.101.112.143   <none>        12345/TCP                       44s\nephemeral-3610      csi-hostpath-provisioner   ClusterIP   10.109.40.109    <none>        12345/TCP                       44s\nephemeral-3610      csi-hostpath-resizer       ClusterIP   10.105.216.32    <none>        12345/TCP                       44s\nephemeral-3610      csi-hostpathplugin         ClusterIP   10.111.81.17     <none>        12345/TCP                       44s\nephemeral-3610      csi-snapshotter            ClusterIP   10.103.158.50    <none>        12345/TCP                       44s\nephemeral-4014      csi-hostpath-attacher      ClusterIP   10.104.248.54    <none>        12345/TCP                       86s\nephemeral-4014      csi-hostpath-provisioner   ClusterIP   10.109.187.126   <none>        12345/TCP                       86s\nephemeral-4014      csi-hostpath-resizer       ClusterIP   10.101.242.219   <none>        12345/TCP                       86s\nephemeral-4014      csi-hostpathplugin         ClusterIP   10.105.250.203   <none>        12345/TCP                       86s\nephemeral-4014      csi-snapshotter            ClusterIP   10.96.104.68     <none>        12345/TCP                       86s\nkube-system         kube-dns                   ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP          3m44s\nkubectl-8260        service1mjn46qfpsp         
ClusterIP   10.100.57.210    <none>        10000/TCP                       0s\nnettest-9734        node-port-service          NodePort    10.104.219.120   <none>        80:30022/TCP,90:32490/UDP       9s\nnettest-9734        session-affinity-service   NodePort    10.101.150.241   <none>        80:31830/TCP,90:31510/UDP       9s\nprovisioning-4063   csi-hostpath-attacher      ClusterIP   10.107.49.148    <none>        12345/TCP                       7s\nprovisioning-4063   csi-hostpath-provisioner   ClusterIP   10.111.71.83     <none>        12345/TCP                       6s\nprovisioning-4063   csi-hostpath-resizer       ClusterIP   10.96.92.63      <none>        12345/TCP                       6s\nprovisioning-4063   csi-hostpathplugin         ClusterIP   10.97.239.119    <none>        12345/TCP                       7s\nprovisioning-4063   csi-snapshotter            ClusterIP   10.109.213.101   <none>        12345/TCP                       6s\nproxy-8892          proxy-service-chhwb        ClusterIP   10.96.74.233     <none>        80/TCP,81/TCP,443/TCP,444/TCP   6s\nservices-2366       endpoint-test2             ClusterIP   10.99.160.73     <none>        80/TCP                          29s\nservices-9440       multi-endpoint-test        ClusterIP   10.96.192.12     <none>        80/TCP,81/TCP                   1s\nstatefulset-1314    test                       ClusterIP   None             <none>        80/TCP                          27s\nstatefulset-742     test                       ClusterIP   None             <none>        80/TCP                          12s\n"
Jan 14 21:31:12.557: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:32768 --kubeconfig=/root/.kube/kind-test-config get configmaps --all-namespaces'
Jan 14 21:31:12.794: INFO: stderr: ""
Jan 14 21:31:12.794: INFO: stdout: "NAMESPACE            NAME                                                         DATA   AGE\nconfigmap-4054       configmap-test-volume-5df7fbad-b95a-46cc-9a1c-f55dcf1388d9   3      8s\nkube-public          cluster-info                                                 2      3m44s\nkube-system          coredns                                                      1      3m44s\nkube-system          extension-apiserver-authentication                           6      3m47s\nkube-system          kube-proxy                                                   2      3m44s\nkube-system          kubeadm-config                                               2      3m45s\nkube-system          kubelet-config-1.18                                          1      3m45s\nkubectl-8260         cm1mjn46qfpsp                                                1      0s\nlocal-path-storage   local-path-config                                            1      3m40s\n"
... skipping 17 lines ...
Jan 14 21:31:14.583: INFO: stdout: "NAMESPACE               NAME                       READY   AGE\ncsi-mock-volumes-5275   csi-mockplugin             0/1     2s\ncsi-mock-volumes-5275   csi-mockplugin-attacher    0/1     2s\ncsi-mock-volumes-5275   csi-mockplugin-resizer     0/1     2s\ncsi-mock-volumes-6885   csi-mockplugin             1/1     63s\ncsi-mock-volumes-6885   csi-mockplugin-attacher    1/1     63s\ncsi-mock-volumes-7850   csi-mockplugin             1/1     18s\ncsi-mock-volumes-7850   csi-mockplugin-attacher    1/1     18s\ncsi-mock-volumes-8526   csi-mockplugin             1/1     52s\ncsi-mock-volumes-8526   csi-mockplugin-attacher    1/1     52s\ncsi-mock-volumes-8822   csi-mockplugin             1/1     77s\ncsi-mock-volumes-8822   csi-mockplugin-attacher    1/1     77s\nephemeral-3610          csi-hostpath-attacher      1/1     46s\nephemeral-3610          csi-hostpath-provisioner   1/1     46s\nephemeral-3610          csi-hostpath-resizer       1/1     46s\nephemeral-3610          csi-hostpathplugin         1/1     46s\nephemeral-3610          csi-snapshotter            1/1     46s\nephemeral-4014          csi-hostpath-attacher      1/1     88s\nephemeral-4014          csi-hostpath-provisioner   1/1     88s\nephemeral-4014          csi-hostpath-resizer       1/1     88s\nephemeral-4014          csi-hostpathplugin         1/1     88s\nephemeral-4014          csi-snapshotter            1/1     88s\nkubectl-8260            ss3mjn46qfpsp              0/1     0s\nprovisioning-4063       csi-hostpath-attacher      0/1     9s\nprovisioning-4063       csi-hostpath-provisioner   0/1     8s\nprovisioning-4063       csi-hostpath-resizer       0/1     8s\nprovisioning-4063       csi-hostpathplugin         0/1     9s\nprovisioning-4063       csi-snapshotter            0/1     8s\nstatefulset-1314        ss                         0/1     29s\nstatefulset-742         ss                         0/2     14s\n"
Jan 14 21:31:14.675: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:32768 --kubeconfig=/root/.kube/kind-test-config get controllerrevisions --all-namespaces'
Jan 14 21:31:14.889: INFO: stderr: ""
Jan 14 21:31:14.889: INFO: stdout: "NAMESPACE               NAME                                  CONTROLLER                                  REVISION   AGE\ncsi-mock-volumes-5275   csi-mockplugin-5445859894             statefulset.apps/csi-mockplugin             1          2s\ncsi-mock-volumes-5275   csi-mockplugin-attacher-5cf7c5d885    statefulset.apps/csi-mockplugin-attacher    1          2s\ncsi-mock-volumes-5275   csi-mockplugin-resizer-665696d444     statefulset.apps/csi-mockplugin-resizer     1          2s\ncsi-mock-volumes-6885   csi-mockplugin-96d85cd84              statefulset.apps/csi-mockplugin             1          63s\ncsi-mock-volumes-6885   csi-mockplugin-attacher-845bcb7fb9    statefulset.apps/csi-mockplugin-attacher    1          63s\ncsi-mock-volumes-7850   csi-mockplugin-74fb7954b4             statefulset.apps/csi-mockplugin             1          18s\ncsi-mock-volumes-7850   csi-mockplugin-attacher-687d588c9d    statefulset.apps/csi-mockplugin-attacher    1          18s\ncsi-mock-volumes-8526   csi-mockplugin-9d9596974              statefulset.apps/csi-mockplugin             1          52s\ncsi-mock-volumes-8526   csi-mockplugin-attacher-77499d566f    statefulset.apps/csi-mockplugin-attacher    1          52s\ncsi-mock-volumes-8822   csi-mockplugin-attacher-5696695c84    statefulset.apps/csi-mockplugin-attacher    1          77s\ncsi-mock-volumes-8822   csi-mockplugin-d87d875dd              statefulset.apps/csi-mockplugin             1          77s\nephemeral-3610          csi-hostpath-attacher-7947459c78      statefulset.apps/csi-hostpath-attacher      1          46s\nephemeral-3610          csi-hostpath-provisioner-7cd45ccb9    statefulset.apps/csi-hostpath-provisioner   1          46s\nephemeral-3610          csi-hostpath-resizer-85578db789       statefulset.apps/csi-hostpath-resizer       1          46s\nephemeral-3610          csi-hostpathplugin-7f649fc8c9         statefulset.apps/csi-hostpathplugin         1          46s\nephemeral-3610 
         csi-snapshotter-848c6df94             statefulset.apps/csi-snapshotter            1          46s\nephemeral-4014          csi-hostpath-attacher-5dc6f45c9f      statefulset.apps/csi-hostpath-attacher      1          88s\nephemeral-4014          csi-hostpath-provisioner-8454cc7687   statefulset.apps/csi-hostpath-provisioner   1          88s\nephemeral-4014          csi-hostpath-resizer-76bfbbf9fd       statefulset.apps/csi-hostpath-resizer       1          88s\nephemeral-4014          csi-hostpathplugin-5d8f86f544         statefulset.apps/csi-hostpathplugin         1          88s\nephemeral-4014          csi-snapshotter-5bb7b5cd99            statefulset.apps/csi-snapshotter            1          88s\nkube-system             kindnet-5b955bbc76                    daemonset.apps/kindnet                      1          3m31s\nkube-system             kube-proxy-77b478d68                  daemonset.apps/kube-proxy                   1          3m31s\nkubectl-8260            crs3mjn46qfpsp                        <none>                                      0          0s\nprovisioning-4063       csi-hostpath-attacher-55bb756db7      statefulset.apps/csi-hostpath-attacher      1          9s\nprovisioning-4063       csi-hostpath-provisioner-56fc465fd5   statefulset.apps/csi-hostpath-provisioner   1          8s\nprovisioning-4063       csi-hostpath-resizer-fcc99c998        statefulset.apps/csi-hostpath-resizer       1          8s\nprovisioning-4063       csi-hostpathplugin-b9f44779d          statefulset.apps/csi-hostpathplugin         1          9s\nprovisioning-4063       csi-snapshotter-6df96dfd67            statefulset.apps/csi-snapshotter            1          8s\nstatefulset-1314        ss-6ddf64cb75                         statefulset.apps/ss                         1          29s\nstatefulset-742         ss-6ddf64cb75                         statefulset.apps/ss                         1          14s\n"
Jan 14 21:31:14.985: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:32768 --kubeconfig=/root/.kube/kind-test-config get events --all-namespaces'
Jan 14 21:31:15.418: INFO: stderr: ""
Jan 14 21:31:15.419: INFO: stdout: "NAMESPACE                            LAST SEEN   TYPE      REASON                       OBJECT                                                      MESSAGE\nconfigmap-4054                       11s         Normal    Scheduled                    pod/pod-configmaps-d1fd0c22-79a0-41c6-b67c-0727e7df2580     Successfully assigned configmap-4054/pod-configmaps-d1fd0c22-79a0-41c6-b67c-0727e7df2580 to kind-worker\nconfigmap-4054                       8s          Normal    Pulled                       pod/pod-configmaps-d1fd0c22-79a0-41c6-b67c-0727e7df2580     Container image \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\" already present on machine\nconfigmap-4054                       8s          Normal    Created                      pod/pod-configmaps-d1fd0c22-79a0-41c6-b67c-0727e7df2580     Created container configmap-volume-test\nconfigmap-4054                       7s          Normal    Started                      pod/pod-configmaps-d1fd0c22-79a0-41c6-b67c-0727e7df2580     Started container configmap-volume-test\ncronjob-9015                         6s          Normal    Scheduled                    pod/forbid-1579037460-pjdnc                                 Successfully assigned cronjob-9015/forbid-1579037460-pjdnc to kind-worker\ncronjob-9015                         2s          Normal    Pulled                       pod/forbid-1579037460-pjdnc                                 Container image \"docker.io/library/busybox:1.29\" already present on machine\ncronjob-9015                         2s          Warning   Failed                       pod/forbid-1579037460-pjdnc                                 Error: cannot find volume \"data\" to mount into container \"c\"\ncronjob-9015                         6s          Normal    SuccessfulCreate             job/forbid-1579037460                                       Created pod: forbid-1579037460-pjdnc\ncronjob-9015                         6s          Normal    SuccessfulCreate       
      cronjob/forbid                                              Created job forbid-1579037460\ncsi-mock-volumes-5275                3s          Normal    SuccessfulCreate             statefulset/csi-mockplugin-attacher                         create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\ncsi-mock-volumes-5275                0s          Normal    Pulled                       pod/csi-mockplugin-resizer-0                                Container image \"quay.io/k8scsi/csi-resizer:v0.4.0\" already present on machine\ncsi-mock-volumes-5275                3s          Normal    SuccessfulCreate             statefulset/csi-mockplugin-resizer                          create Pod csi-mockplugin-resizer-0 in StatefulSet csi-mockplugin-resizer successful\ncsi-mock-volumes-5275                3s          Normal    SuccessfulCreate             statefulset/csi-mockplugin                                  create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\ncsi-mock-volumes-5275                2s          Normal    ExternalProvisioning         persistentvolumeclaim/pvc-vksrv                             waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-5275\" or manually created by system administrator\ncsi-mock-volumes-6885                62s         Normal    Pulled                       pod/csi-mockplugin-0                                        Container image \"quay.io/k8scsi/csi-provisioner:v1.5.0\" already present on machine\ncsi-mock-volumes-6885                62s         Normal    Created                      pod/csi-mockplugin-0                                        Created container csi-provisioner\ncsi-mock-volumes-6885                62s         Normal    Started                      pod/csi-mockplugin-0                                        Started container csi-provisioner\ncsi-mock-volumes-6885                62s         Normal    Pulled                       
pod/csi-mockplugin-0                                        Container image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\" already present on machine\ncsi-mock-volumes-6885                62s         Normal    Created                      pod/csi-mockplugin-0                                        Created container driver-registrar\ncsi-mock-volumes-6885                62s         Normal    Started                      pod/csi-mockplugin-0                                        Started container driver-registrar\ncsi-mock-volumes-6885                62s         Normal    Pulling                      pod/csi-mockplugin-0                                        Pulling image \"quay.io/k8scsi/mock-driver:v2.1.0\"\ncsi-mock-volumes-6885                53s         Normal    Pulled                       pod/csi-mockplugin-0                                        Successfully pulled image \"quay.io/k8scsi/mock-driver:v2.1.0\"\ncsi-mock-volumes-6885                53s         Normal    Created                      pod/csi-mockplugin-0                                        Created container mock\ncsi-mock-volumes-6885                53s         Normal    Started                      pod/csi-mockplugin-0                                        Started container mock\ncsi-mock-volumes-6885                63s         Normal    Pulled                       pod/csi-mockplugin-attacher-0                               Container image \"quay.io/k8scsi/csi-attacher:v2.1.0\" already present on machine\ncsi-mock-volumes-6885                62s         Normal    Created                      pod/csi-mockplugin-attacher-0                               Created container csi-attacher\ncsi-mock-volumes-6885                62s         Normal    Started                      pod/csi-mockplugin-attacher-0                               Started container csi-attacher\ncsi-mock-volumes-6885                64s         Normal    SuccessfulCreate             statefulset/csi-mockplugin-attacher    
                     create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\ncsi-mock-volumes-6885                64s         Normal    SuccessfulCreate             statefulset/csi-mockplugin                                  create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\ncsi-mock-volumes-6885                62s         Normal    ExternalProvisioning         persistentvolumeclaim/pvc-flqll                             waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-6885\" or manually created by system administrator\ncsi-mock-volumes-6885                52s         Normal    Provisioning                 persistentvolumeclaim/pvc-flqll                             External provisioner is provisioning volume for claim \"csi-mock-volumes-6885/pvc-flqll\"\ncsi-mock-volumes-6885                52s         Normal    ProvisioningSucceeded        persistentvolumeclaim/pvc-flqll                             Successfully provisioned volume pvc-00582b0e-ef2d-4613-8059-48f011f8578f\ncsi-mock-volumes-6885                51s         Warning   FailedAttachVolume           pod/pvc-volume-tester-2w7vb                                 AttachVolume.Attach failed for volume \"pvc-00582b0e-ef2d-4613-8059-48f011f8578f\" : cannot find NodeID for driver \"csi-mock-csi-mock-volumes-6885\" for node \"kind-worker\"\ncsi-mock-volumes-6885                50s         Normal    SuccessfulAttachVolume       pod/pvc-volume-tester-2w7vb                                 AttachVolume.Attach succeeded for volume \"pvc-00582b0e-ef2d-4613-8059-48f011f8578f\"\ncsi-mock-volumes-6885                40s         Normal    Pulled                       pod/pvc-volume-tester-2w7vb                                 Container image \"k8s.gcr.io/pause:3.1\" already present on machine\ncsi-mock-volumes-6885                40s         Normal    Created                      pod/pvc-volume-tester-2w7vb                          
       Created container volume-tester\ncsi-mock-volumes-6885                40s         Normal    Started                      pod/pvc-volume-tester-2w7vb                                 Started container volume-tester\ncsi-mock-volumes-6885                24s         Normal    Killing                      pod/pvc-volume-tester-2w7vb                                 Stopping container volume-tester\ncsi-mock-volumes-7542                2m22s       Normal    Pulling                      pod/csi-mockplugin-0                                        Pulling image \"quay.io/k8scsi/csi-provisioner:v1.5.0\"\ncsi-mock-volumes-7542                2m16s       Normal    Pulled                       pod/csi-mockplugin-0                                        Successfully pulled image \"quay.io/k8scsi/csi-provisioner:v1.5.0\"\ncsi-mock-volumes-7542                2m16s       Normal    Created                      pod/csi-mockplugin-0                                        Created container csi-provisioner\ncsi-mock-volumes-7542                2m16s       Normal    Started                      pod/csi-mockplugin-0                                        Started container csi-provisioner\ncsi-mock-volumes-7542                2m16s       Normal    Pulling                      pod/csi-mockplugin-0                                        Pulling image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\"\ncsi-mock-volumes-7542                2m9s        Normal    Pulled                       pod/csi-mockplugin-0                                        Successfully pulled image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\"\ncsi-mock-volumes-7542                2m8s        Normal    Created                      pod/csi-mockplugin-0                                        Created container driver-registrar\ncsi-mock-volumes-7542                2m8s        Normal    Started                      pod/csi-mockplugin-0                                        Started container 
driver-registrar\ncsi-mock-volumes-7542                2m8s        Normal    Pulling                      pod/csi-mockplugin-0                                        Pulling image \"quay.io/k8scsi/mock-driver:v2.1.0\"\ncsi-mock-volumes-7542                116s        Normal    Pulled                       pod/csi-mockplugin-0                                        Successfully pulled image \"quay.io/k8scsi/mock-driver:v2.1.0\"\ncsi-mock-volumes-7542                116s        Normal    Created                      pod/csi-mockplugin-0                                        Created container mock\ncsi-mock-volumes-7542                115s        Normal    Started                      pod/csi-mockplugin-0                                        Started container mock\ncsi-mock-volumes-7542                6s          Normal    Killing                      pod/csi-mockplugin-0                                        Stopping container mock\ncsi-mock-volumes-7542                6s          Normal    Killing                      pod/csi-mockplugin-0                                        Stopping container driver-registrar\ncsi-mock-volumes-7542                6s          Normal    Killing                      pod/csi-mockplugin-0                                        Stopping container csi-provisioner\ncsi-mock-volumes-7542                2m22s       Normal    Pulling                      pod/csi-mockplugin-resizer-0                                Pulling image \"quay.io/k8scsi/csi-resizer:v0.4.0\"\ncsi-mock-volumes-7542                2m13s       Normal    Pulled                       pod/csi-mockplugin-resizer-0                                Successfully pulled image \"quay.io/k8scsi/csi-resizer:v0.4.0\"\ncsi-mock-volumes-7542                2m12s       Normal    Created                      pod/csi-mockplugin-resizer-0                                Created container csi-resizer\ncsi-mock-volumes-7542                2m12s       Normal    Started                      
pod/csi-mockplugin-resizer-0                                Started container csi-resizer\ncsi-mock-volumes-7542                2m24s       Normal    SuccessfulCreate             statefulset/csi-mockplugin-resizer                          create Pod csi-mockplugin-resizer-0 in StatefulSet csi-mockplugin-resizer successful\ncsi-mock-volumes-7542                2m24s       Normal    SuccessfulCreate             statefulset/csi-mockplugin                                  create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\ncsi-mock-volumes-7542                2m2s        Normal    ExternalProvisioning         persistentvolumeclaim/pvc-pfg98                             waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-7542\" or manually created by system administrator\ncsi-mock-volumes-7542                115s        Normal    Provisioning                 persistentvolumeclaim/pvc-pfg98                             External provisioner is provisioning volume for claim \"csi-mock-volumes-7542/pvc-pfg98\"\ncsi-mock-volumes-7542                115s        Normal    ProvisioningSucceeded        persistentvolumeclaim/pvc-pfg98                             Successfully provisioned volume pvc-6909ecbc-b532-45f6-a650-3e9947fcf736\ncsi-mock-volumes-7542                105s        Warning   ExternalExpanding            persistentvolumeclaim/pvc-pfg98                             Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\ncsi-mock-volumes-7542                105s        Normal    Resizing                     persistentvolumeclaim/pvc-pfg98                             External resizer is resizing volume pvc-6909ecbc-b532-45f6-a650-3e9947fcf736\ncsi-mock-volumes-7542                104s        Normal    FileSystemResizeRequired     persistentvolumeclaim/pvc-pfg98                             Require file system resize of volume on 
node\ncsi-mock-volumes-7542                25s         Normal    FileSystemResizeSuccessful   persistentvolumeclaim/pvc-pfg98                             MountVolume.NodeExpandVolume succeeded for volume \"pvc-6909ecbc-b532-45f6-a650-3e9947fcf736\"\ncsi-mock-volumes-7542                111s        Normal    Pulled                       pod/pvc-volume-tester-gks4f                                 Container image \"k8s.gcr.io/pause:3.1\" already present on machine\ncsi-mock-volumes-7542                111s        Normal    Created                      pod/pvc-volume-tester-gks4f                                 Created container volume-tester\ncsi-mock-volumes-7542                111s        Normal    Started                      pod/pvc-volume-tester-gks4f                                 Started container volume-tester\ncsi-mock-volumes-7542                25s         Normal    FileSystemResizeSuccessful   pod/pvc-volume-tester-gks4f                                 MountVolume.NodeExpandVolume succeeded for volume \"pvc-6909ecbc-b532-45f6-a650-3e9947fcf736\"\ncsi-mock-volumes-7542                24s         Normal    Killing                      pod/pvc-volume-tester-gks4f                                 Stopping container volume-tester\ncsi-mock-volumes-7850                18s         Normal    Pulled                       pod/csi-mockplugin-0                                        Container image \"quay.io/k8scsi/csi-provisioner:v1.5.0\" already present on machine\ncsi-mock-volumes-7850                17s         Normal    Created                      pod/csi-mockplugin-0                                        Created container csi-provisioner\ncsi-mock-volumes-7850                17s         Normal    Started                      pod/csi-mockplugin-0                                        Started container csi-provisioner\ncsi-mock-volumes-7850                17s         Normal    Pulled                       pod/csi-mockplugin-0                                     
   Container image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\" already present on machine\ncsi-mock-volumes-7850                17s         Normal    Created                      pod/csi-mockplugin-0                                        Created container driver-registrar\ncsi-mock-volumes-7850                17s         Normal    Started                      pod/csi-mockplugin-0                                        Started container driver-registrar\ncsi-mock-volumes-7850                17s         Normal    Pulled                       pod/csi-mockplugin-0                                        Container image \"quay.io/k8scsi/mock-driver:v2.1.0\" already present on machine\ncsi-mock-volumes-7850                17s         Normal    Created                      pod/csi-mockplugin-0                                        Created container mock\ncsi-mock-volumes-7850                17s         Normal    Started                      pod/csi-mockplugin-0                                        Started container mock\ncsi-mock-volumes-7850                17s         Normal    Pulled                       pod/csi-mockplugin-attacher-0                               Container image \"quay.io/k8scsi/csi-attacher:v2.1.0\" already present on machine\ncsi-mock-volumes-7850                17s         Normal    Created                      pod/csi-mockplugin-attacher-0                               Created container csi-attacher\ncsi-mock-volumes-7850                17s         Normal    Started                      pod/csi-mockplugin-attacher-0                               Started container csi-attacher\ncsi-mock-volumes-7850                19s         Normal    SuccessfulCreate             statefulset/csi-mockplugin-attacher                         create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\ncsi-mock-volumes-7850                19s         Normal    SuccessfulCreate             statefulset/csi-mockplugin                  
                create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\ncsi-mock-volumes-7850                17s         Normal    ExternalProvisioning         persistentvolumeclaim/pvc-pnvs8                             waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-7850\" or manually created by system administrator\ncsi-mock-volumes-7850                16s         Normal    Provisioning                 persistentvolumeclaim/pvc-pnvs8                             External provisioner is provisioning volume for claim \"csi-mock-volumes-7850/pvc-pnvs8\"\ncsi-mock-volumes-7850                16s         Normal    ProvisioningSucceeded        persistentvolumeclaim/pvc-pnvs8                             Successfully provisioned volume pvc-8459ee6b-d1dc-4cd8-814c-9341399138b7\ncsi-mock-volumes-7850                14s         Warning   FailedAttachVolume           pod/pvc-volume-tester-98vdt                                 AttachVolume.Attach failed for volume \"pvc-8459ee6b-d1dc-4cd8-814c-9341399138b7\" : cannot find NodeID for driver \"csi-mock-csi-mock-volumes-7850\" for node \"kind-worker2\"\ncsi-mock-volumes-7850                13s         Normal    SuccessfulAttachVolume       pod/pvc-volume-tester-98vdt                                 AttachVolume.Attach succeeded for volume \"pvc-8459ee6b-d1dc-4cd8-814c-9341399138b7\"\ncsi-mock-volumes-8526                51s         Normal    Pulled                       pod/csi-mockplugin-0                                        Container image \"quay.io/k8scsi/csi-provisioner:v1.5.0\" already present on machine\ncsi-mock-volumes-8526                51s         Normal    Created                      pod/csi-mockplugin-0                                        Created container csi-provisioner\ncsi-mock-volumes-8526                51s         Normal    Started                      pod/csi-mockplugin-0                                        Started container 
csi-provisioner\ncsi-mock-volumes-8526                51s         Normal    Pulled                       pod/csi-mockplugin-0                                        Container image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\" already present on machine\ncsi-mock-volumes-8526                51s         Normal    Created                      pod/csi-mockplugin-0                                        Created container driver-registrar\ncsi-mock-volumes-8526                50s         Normal    Started                      pod/csi-mockplugin-0                                        Started container driver-registrar\ncsi-mock-volumes-8526                50s         Normal    Pulled                       pod/csi-mockplugin-0                                        Container image \"quay.io/k8scsi/mock-driver:v2.1.0\" already present on machine\ncsi-mock-volumes-8526                50s         Normal    Created                      pod/csi-mockplugin-0                                        Created container mock\ncsi-mock-volumes-8526                50s         Normal    Started                      pod/csi-mockplugin-0                                        Started container mock\ncsi-mock-volumes-8526                51s         Normal    Pulled                       pod/csi-mockplugin-attacher-0                               Container image \"quay.io/k8scsi/csi-attacher:v2.1.0\" already present on machine\ncsi-mock-volumes-8526                51s         Normal    Created                      pod/csi-mockplugin-attacher-0                               Created container csi-attacher\ncsi-mock-volumes-8526                51s         Normal    Started                      pod/csi-mockplugin-attacher-0                               Started container csi-attacher\ncsi-mock-volumes-8526                53s         Normal    SuccessfulCreate             statefulset/csi-mockplugin-attacher                         create Pod csi-mockplugin-attacher-0 in StatefulSet 
csi-mockplugin-attacher successful\ncsi-mock-volumes-8526                53s         Normal    SuccessfulCreate             statefulset/csi-mockplugin                                  create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\ncsi-mock-volumes-8526                53s         Normal    ExternalProvisioning         persistentvolumeclaim/pvc-mvk59                             waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-8526\" or manually created by system administrator\ncsi-mock-volumes-8526                50s         Normal    Provisioning                 persistentvolumeclaim/pvc-mvk59                             External provisioner is provisioning volume for claim \"csi-mock-volumes-8526/pvc-mvk59\"\ncsi-mock-volumes-8526                50s         Normal    ProvisioningSucceeded        persistentvolumeclaim/pvc-mvk59                             Successfully provisioned volume pvc-09d25c8b-fd4a-4dde-a94a-a87c21cc9d1e\ncsi-mock-volumes-8526                48s         Normal    SuccessfulAttachVolume       pod/pvc-volume-tester-rbwgj                                 AttachVolume.Attach succeeded for volume \"pvc-09d25c8b-fd4a-4dde-a94a-a87c21cc9d1e\"\ncsi-mock-volumes-8526                37s         Normal    Pulled                       pod/pvc-volume-tester-rbwgj                                 Container image \"k8s.gcr.io/pause:3.1\" already present on machine\ncsi-mock-volumes-8526                37s         Normal    Created                      pod/pvc-volume-tester-rbwgj                                 Created container volume-tester\ncsi-mock-volumes-8526                36s         Normal    Started                      pod/pvc-volume-tester-rbwgj                                 Started container volume-tester\ncsi-mock-volumes-8526                24s         Normal    Killing                      pod/pvc-volume-tester-rbwgj                                 Stopping container 
volume-tester\ncsi-mock-volumes-8822                75s         Normal    Pulled                       pod/csi-mockplugin-0                                        Container image \"quay.io/k8scsi/csi-provisioner:v1.5.0\" already present on machine\ncsi-mock-volumes-8822                75s         Normal    Created                      pod/csi-mockplugin-0                                        Created container csi-provisioner\ncsi-mock-volumes-8822                75s         Normal    Started                      pod/csi-mockplugin-0                                        Started container csi-provisioner\ncsi-mock-volumes-8822                75s         Normal    Pulled                       pod/csi-mockplugin-0                                        Container image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\" already present on machine\ncsi-mock-volumes-8822                74s         Normal    Created                      pod/csi-mockplugin-0                                        Created container driver-registrar\ncsi-mock-volumes-8822                74s         Normal    Started                      pod/csi-mockplugin-0                                        Started container driver-registrar\ncsi-mock-volumes-8822                74s         Normal    Pulled                       pod/csi-mockplugin-0                                        Container image \"quay.io/k8scsi/mock-driver:v2.1.0\" already present on machine\ncsi-mock-volumes-8822                74s         Normal    Created                      pod/csi-mockplugin-0                                        Created container mock\ncsi-mock-volumes-8822                73s         Normal    Started                      pod/csi-mockplugin-0                                        Started container mock\ncsi-mock-volumes-8822                75s         Normal    Pulled                       pod/csi-mockplugin-attacher-0                               Container image 
\"quay.io/k8scsi/csi-attacher:v2.1.0\" already present on machine\ncsi-mock-volumes-8822                75s         Normal    Created                      pod/csi-mockplugin-attacher-0                               Created container csi-attacher\ncsi-mock-volumes-8822                74s         Normal    Started                      pod/csi-mockplugin-attacher-0                               Started container csi-attacher\ncsi-mock-volumes-8822                78s         Normal    SuccessfulCreate             statefulset/csi-mockplugin-attacher                         create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\ncsi-mock-volumes-8822                78s         Normal    SuccessfulCreate             statefulset/csi-mockplugin                                  create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\ncsi-mock-volumes-8822                76s         Normal    ExternalProvisioning         persistentvolumeclaim/pvc-bhf4k                             waiting for a volume to be created, either by external provisioner \"csi-mock-csi-mock-volumes-8822\" or manually created by system administrator\ncsi-mock-volumes-8822                72s         Normal    Provisioning                 persistentvolumeclaim/pvc-bhf4k                             External provisioner is provisioning volume for claim \"csi-mock-volumes-8822/pvc-bhf4k\"\ncsi-mock-volumes-8822                72s         Normal    ProvisioningSucceeded        persistentvolumeclaim/pvc-bhf4k                             Successfully provisioned volume pvc-17503600-c5e1-464b-86c7-33e9a8004ace\ncsi-mock-volumes-8822                32s         Warning   ExternalExpanding            persistentvolumeclaim/pvc-bhf4k                             Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.\ncsi-mock-volumes-8822                71s         Warning   FailedAttachVolume         
  pod/pvc-volume-tester-ch6xl                                 AttachVolume.Attach failed for volume \"pvc-17503600-c5e1-464b-86c7-33e9a8004ace\" : cannot find NodeID for driver \"csi-mock-csi-mock-volumes-8822\" for node \"kind-worker2\"\ncsi-mock-volumes-8822                70s         Normal    SuccessfulAttachVolume       pod/pvc-volume-tester-ch6xl                                 AttachVolume.Attach succeeded for volume \"pvc-17503600-c5e1-464b-86c7-33e9a8004ace\"\ncsi-mock-volumes-8822                58s         Normal    Pulled                       pod/pvc-volume-tester-ch6xl                                 Container image \"k8s.gcr.io/pause:3.1\" already present on machine\ncsi-mock-volumes-8822                58s         Normal    Created                      pod/pvc-volume-tester-ch6xl                                 Created container volume-tester\ncsi-mock-volumes-8822                57s         Normal    Started                      pod/pvc-volume-tester-ch6xl                                 Started container volume-tester\ndefault                              3m47s       Normal    Starting                     node/kind-control-plane                                     Starting kubelet.\ndefault                              3m47s       Warning   CheckLimitsForResolvConf     node/kind-control-plane                                     Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\ndefault                              3m47s       Normal    NodeHasSufficientMemory      node/kind-control-plane                                     Node kind-control-plane status is now: NodeHasSufficientMemory\ndefault                              3m47s       Normal    NodeHasNoDiskPressure        node/kind-control-plane                                     Node kind-control-plane status is now: NodeHasNoDiskPressure\ndefault                              3m47s       Normal    NodeHasSufficientPID         node/kind-control-plane      
                               Node kind-control-plane status is now: NodeHasSufficientPID\ndefault                              3m47s       Normal    NodeAllocatableEnforced      node/kind-control-plane                                     Updated Node Allocatable limit across pods\ndefault                              3m32s       Normal    RegisteredNode               node/kind-control-plane                                     Node kind-control-plane event: Registered Node kind-control-plane in Controller\ndefault                              3m30s       Normal    Starting                     node/kind-control-plane                                     Starting kube-proxy.\ndefault                              3m17s       Normal    NodeReady                    node/kind-control-plane                                     Node kind-control-plane status is now: NodeReady\ndefault                              3m13s       Normal    NodeHasSufficientPID         node/kind-worker                                            Node kind-worker status is now: NodeHasSufficientPID\ndefault                              3m12s       Normal    RegisteredNode               node/kind-worker                                            Node kind-worker event: Registered Node kind-worker in Controller\ndefault                              3m9s        Normal    Starting                     node/kind-worker                                            Starting kube-proxy.\ndefault                              3m14s       Normal    NodeHasNoDiskPressure        node/kind-worker2                                           Node kind-worker2 status is now: NodeHasNoDiskPressure\ndefault                              3m12s       Normal    RegisteredNode               node/kind-worker2                                           Node kind-worker2 event: Registered Node kind-worker2 in Controller\ndefault                              3m9s        Normal    Starting                     node/kind-worker2      
                                     Starting kube-proxy.\ndns-9442                             72s         Normal    Scheduled                    pod/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f           Successfully assigned dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f to kind-worker\ndns-9442                             68s         Normal    Pulling                      pod/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f           Pulling image \"gcr.io/kubernetes-e2e-test-images/test-webserver:1.0\"\ndns-9442                             67s         Normal    Pulled                       pod/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f           Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/test-webserver:1.0\"\ndns-9442                             67s         Normal    Created                      pod/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f           Created container webserver\ndns-9442                             67s         Normal    Started                      pod/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f           Started container webserver\ndns-9442                             66s         Normal    Pulling                      pod/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f           Pulling image \"gcr.io/kubernetes-e2e-test-images/dnsutils:1.1\"\ndns-9442                             64s         Normal    Pulled                       pod/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f           Successfully pulled image \"gcr.io/kubernetes-e2e-test-images/dnsutils:1.1\"\ndns-9442                             64s         Normal    Created                      pod/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f           Created container querier\ndns-9442                             63s         Normal    Started                      pod/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f           Started container querier\ndns-9442                             63s         Normal    Pulling                      
pod/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f           Pulling image "gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0"
dns-9442                             56s         Normal    Pulled                       pod/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f           Successfully pulled image "gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0"
dns-9442                             55s         Normal    Created                      pod/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f           Created container jessie-querier
dns-9442                             55s         Normal    Started                      pod/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f           Started container jessie-querier
ephemeral-3610                       46s         Normal    Pulled                       pod/csi-hostpath-attacher-0                                 Container image "quay.io/k8scsi/csi-attacher:v2.1.0" already present on machine
ephemeral-3610                       46s         Normal    Created                      pod/csi-hostpath-attacher-0                                 Created container csi-attacher
ephemeral-3610                       45s         Normal    Started                      pod/csi-hostpath-attacher-0                                 Started container csi-attacher
ephemeral-3610                       47s         Normal    SuccessfulCreate             statefulset/csi-hostpath-attacher                           create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful
ephemeral-3610                       46s         Normal    Pulled                       pod/csi-hostpath-provisioner-0                              Container image "quay.io/k8scsi/csi-provisioner:v1.5.0" already present on machine
ephemeral-3610                       46s         Normal    Created                      pod/csi-hostpath-provisioner-0                              Created container csi-provisioner
ephemeral-3610                       45s         Normal    Started                      pod/csi-hostpath-provisioner-0                              Started container csi-provisioner
ephemeral-3610                       47s         Normal    SuccessfulCreate             statefulset/csi-hostpath-provisioner                        create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful
ephemeral-3610                       46s         Normal    Pulled                       pod/csi-hostpath-resizer-0                                  Container image "quay.io/k8scsi/csi-resizer:v0.4.0" already present on machine
ephemeral-3610                       46s         Normal    Created                      pod/csi-hostpath-resizer-0                                  Created container csi-resizer
ephemeral-3610                       45s         Normal    Started                      pod/csi-hostpath-resizer-0                                  Started container csi-resizer
ephemeral-3610                       47s         Normal    SuccessfulCreate             statefulset/csi-hostpath-resizer                            create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful
ephemeral-3610                       46s         Normal    Pulled                       pod/csi-hostpathplugin-0                                    Container image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0" already present on machine
ephemeral-3610                       46s         Normal    Created                      pod/csi-hostpathplugin-0                                    Created container node-driver-registrar
ephemeral-3610                       45s         Normal    Started                      pod/csi-hostpathplugin-0                                    Started container node-driver-registrar
ephemeral-3610                       45s         Normal    Pulled                       pod/csi-hostpathplugin-0                                    Container image "quay.io/k8scsi/hostpathplugin:v1.3.0-rc1" already present on machine
ephemeral-3610                       45s         Normal    Created                      pod/csi-hostpathplugin-0                                    Created container hostpath
ephemeral-3610                       45s         Normal    Started                      pod/csi-hostpathplugin-0                                    Started container hostpath
ephemeral-3610                       45s         Normal    Pulled                       pod/csi-hostpathplugin-0                                    Container image "quay.io/k8scsi/livenessprobe:v1.1.0" already present on machine
ephemeral-3610                       45s         Normal    Created                      pod/csi-hostpathplugin-0                                    Created container liveness-probe
ephemeral-3610                       44s         Normal    Started                      pod/csi-hostpathplugin-0                                    Started container liveness-probe
ephemeral-3610                       47s         Normal    SuccessfulCreate             statefulset/csi-hostpathplugin                              create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
ephemeral-3610                       45s         Normal    Pulled                       pod/csi-snapshotter-0                                       Container image "quay.io/k8scsi/csi-snapshotter:v2.0.0" already present on machine
ephemeral-3610                       45s         Normal    Created                      pod/csi-snapshotter-0                                       Created container csi-snapshotter
ephemeral-3610                       44s         Normal    Started                      pod/csi-snapshotter-0                                       Started container csi-snapshotter
ephemeral-3610                       47s         Normal    SuccessfulCreate             statefulset/csi-snapshotter                                 create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful
ephemeral-3610                       45s         Warning   FailedMount                  pod/inline-volume-tester-d6jf6                              MountVolume.SetUp failed for volume "my-volume-0" : kubernetes.io/csi: mounter.SetUpAt failed to get CSI client: driver name csi-hostpath-ephemeral-3610 not found in the list of registered CSI drivers
ephemeral-3610                       42s         Normal    Pulled                       pod/inline-volume-tester-d6jf6                              Container image "docker.io/library/busybox:1.29" already present on machine
ephemeral-3610                       42s         Normal    Created                      pod/inline-volume-tester-d6jf6                              Created container csi-volume-tester
ephemeral-3610                       42s         Normal    Started                      pod/inline-volume-tester-d6jf6                              Started container csi-volume-tester
ephemeral-3610                       31s         Normal    Killing                      pod/inline-volume-tester-d6jf6                              Stopping container csi-volume-tester
ephemeral-4014                       87s         Normal    Pulled                       pod/csi-hostpath-attacher-0                                 Container image "quay.io/k8scsi/csi-attacher:v2.1.0" already present on machine
ephemeral-4014                       87s         Normal    Created                      pod/csi-hostpath-attacher-0                                 Created container csi-attacher
ephemeral-4014                       86s         Normal    Started                      pod/csi-hostpath-attacher-0                                 Started container csi-attacher
ephemeral-4014                       89s         Normal    SuccessfulCreate             statefulset/csi-hostpath-attacher                           create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful
ephemeral-4014                       87s         Normal    Pulled                       pod/csi-hostpath-provisioner-0                              Container image "quay.io/k8scsi/csi-provisioner:v1.5.0" already present on machine
ephemeral-4014                       86s         Normal    Created                      pod/csi-hostpath-provisioner-0                              Created container csi-provisioner
ephemeral-4014                       86s         Normal    Started                      pod/csi-hostpath-provisioner-0                              Started container csi-provisioner
ephemeral-4014                       89s         Normal    SuccessfulCreate             statefulset/csi-hostpath-provisioner                        create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful
ephemeral-4014                       86s         Normal    Pulled                       pod/csi-hostpath-resizer-0                                  Container image "quay.io/k8scsi/csi-resizer:v0.4.0" already present on machine
ephemeral-4014                       86s         Normal    Created                      pod/csi-hostpath-resizer-0                                  Created container csi-resizer
ephemeral-4014                       86s         Normal    Started                      pod/csi-hostpath-resizer-0                                  Started container csi-resizer
ephemeral-4014                       89s         Normal    SuccessfulCreate             statefulset/csi-hostpath-resizer                            create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful
ephemeral-4014                       87s         Normal    Pulled                       pod/csi-hostpathplugin-0                                    Container image "quay.io/k8scsi/csi-node-driver-registrar:v1.2.0" already present on machine
ephemeral-4014                       87s         Normal    Created                      pod/csi-hostpathplugin-0                                    Created container node-driver-registrar
ephemeral-4014                       86s         Normal    Started                      pod/csi-hostpathplugin-0                                    Started container node-driver-registrar
ephemeral-4014                       86s         Normal    Pulled                       pod/csi-hostpathplugin-0                                    Container image "quay.io/k8scsi/hostpathplugin:v1.3.0-rc1" already present on machine
ephemeral-4014                       86s         Normal    Created                      pod/csi-hostpathplugin-0                                    Created container hostpath
ephemeral-4014                       85s         Normal    Started                      pod/csi-hostpathplugin-0                                    Started container hostpath
ephemeral-4014                       85s         Normal    Pulled                       pod/csi-hostpathplugin-0                                    Container image "quay.io/k8scsi/livenessprobe:v1.1.0" already present on machine
ephemeral-4014                       85s         Normal    Created                      pod/csi-hostpathplugin-0                                    Created container liveness-probe
ephemeral-4014                       84s         Normal    Started                      pod/csi-hostpathplugin-0                                    Started container liveness-probe
ephemeral-4014                       89s         Normal    SuccessfulCreate             statefulset/csi-hostpathplugin                              create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
ephemeral-4014                       86s         Normal    Pulled                       pod/csi-snapshotter-0                                       Container image "quay.io/k8scsi/csi-snapshotter:v2.0.0" already present on machine
ephemeral-4014                       86s         Normal    Created                      pod/csi-snapshotter-0                                       Created container csi-snapshotter
ephemeral-4014                       85s         Normal    Started                      pod/csi-snapshotter-0                                       Started container csi-snapshotter
ephemeral-4014                       89s         Normal    SuccessfulCreate             statefulset/csi-snapshotter                                 create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful
ephemeral-4014                       87s         Warning   FailedMount                  pod/inline-volume-tester-vplmv                              MountVolume.SetUp failed for volume "my-volume-0" : kubernetes.io/csi: mounter.SetUpAt failed to get CSI client: driver name csi-hostpath-ephemeral-4014 not found in the list of registered CSI drivers
ephemeral-4014                       83s         Normal    Pulled                       pod/inline-volume-tester-vplmv                              Container image "docker.io/library/busybox:1.29" already present on machine
ephemeral-4014                       83s         Normal    Created                      pod/inline-volume-tester-vplmv                              Created container csi-volume-tester
ephemeral-4014                       83s         Normal    Started                      pod/inline-volume-tester-vplmv                              Started container csi-volume-tester
ephemeral-4014                       12s         Normal    Killing                      pod/inline-volume-tester-vplmv                              Stopping container csi-volume-tester
ephemeral-4014                       67s         Normal    Pulled                       pod/inline-volume-tester2-g5n6j                             Container image "docker.io/library/busybox:1.29" already present on machine
ephemeral-4014                       67s         Normal    Created                      pod/inline-volume-tester2-g5n6j                             Created container csi-volume-tester
ephemeral-4014                       66s         Normal    Started                      pod/inline-volume-tester2-g5n6j                             Started container csi-volume-tester
ephemeral-4014                       54s         Normal    Killing                      pod/inline-volume-tester2-g5n6j                             Stopping container csi-volume-tester
job-3875                             25s         Normal    Scheduled                    pod/adopt-release-8j29c                                     Successfully assigned job-3875/adopt-release-8j29c to kind-worker
job-3875                             23s         Normal    Pulled                       pod/adopt-release-8j29c                                     Container image "docker.io/library/busybox:1.29" already present on machine
job-3875                             23s         Normal    Created                      pod/adopt-release-8j29c                                     Created container c
job-3875                             23s         Normal    Started                      pod/adopt-release-8j29c                                     Started container c
job-3875                             25s         Normal    Scheduled                    pod/adopt-release-g4qg2                                     Successfully assigned job-3875/adopt-release-g4qg2 to kind-worker
job-3875                             23s         Normal    Pulled                       pod/adopt-release-g4qg2                                     Container image "docker.io/library/busybox:1.29" already present on machine
job-3875                             23s         Normal    Created                      pod/adopt-release-g4qg2                                     Created container c
job-3875                             23s         Normal    Started                      pod/adopt-release-g4qg2                                     Started container c
job-3875                             6s          Normal    Scheduled                    pod/adopt-release-jg58t                                     Successfully assigned job-3875/adopt-release-jg58t to kind-worker
job-3875                             25s         Normal    SuccessfulCreate             job/adopt-release                                           Created pod: adopt-release-8j29c
job-3875                             25s         Normal    SuccessfulCreate             job/adopt-release                                           Created pod: adopt-release-g4qg2
job-3875                             6s          Normal    SuccessfulCreate             job/adopt-release                                           Created pod: adopt-release-jg58t
kube-system                          3m17s       Warning   FailedScheduling             pod/coredns-6955765f44-45qgf                                0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
kube-system                          3m15s       Normal    Scheduled                    pod/coredns-6955765f44-45qgf                                Successfully assigned kube-system/coredns-6955765f44-45qgf to kind-control-plane
kube-system                          3m14s       Normal    Pulled                       pod/coredns-6955765f44-45qgf                                Container image "k8s.gcr.io/coredns:1.6.5" already present on machine
kube-system                          3m13s       Normal    Created                      pod/coredns-6955765f44-45qgf                                Created container coredns
kube-system                          3m13s       Normal    Started                      pod/coredns-6955765f44-45qgf                                Started container coredns
kube-system                          3m17s       Warning   FailedScheduling             pod/coredns-6955765f44-blnrh                                0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
kube-system                          3m15s       Normal    Scheduled                    pod/coredns-6955765f44-blnrh                                Successfully assigned kube-system/coredns-6955765f44-blnrh to kind-control-plane
kube-system                          3m14s       Normal    Pulled                       pod/coredns-6955765f44-blnrh                                Container image "k8s.gcr.io/coredns:1.6.5" already present on machine
kube-system                          3m13s       Normal    Created                      pod/coredns-6955765f44-blnrh                                Created container coredns
kube-system                          3m13s       Normal    Started                      pod/coredns-6955765f44-blnrh                                Started container coredns
kube-system                          3m32s       Normal    SuccessfulCreate             replicaset/coredns-6955765f44                               Created pod: coredns-6955765f44-45qgf
kube-system                          3m32s       Normal    SuccessfulCreate             replicaset/coredns-6955765f44                               Created pod: coredns-6955765f44-blnrh
kube-system                          3m32s       Normal    ScalingReplicaSet            deployment/coredns                                          Scaled up replica set coredns-6955765f44 to 2
kube-system                          3m32s       Normal    Scheduled                    pod/kindnet-2hf8t                                           Successfully assigned kube-system/kindnet-2hf8t to kind-control-plane
kube-system                          3m30s       Normal    Pulled                       pod/kindnet-2hf8t                                           Container image "kindest/kindnetd:0.5.4" already present on machine
kube-system                          3m29s       Normal    Created                      pod/kindnet-2hf8t                                           Created container kindnet-cni
kube-system                          3m29s       Normal    Started                      pod/kindnet-2hf8t                                           Started container kindnet-cni
kube-system                          3m13s       Normal    Scheduled                    pod/kindnet-6rhkp                                           Successfully assigned kube-system/kindnet-6rhkp to kind-worker
kube-system                          3m12s       Normal    Pulled                       pod/kindnet-6rhkp                                           Container image "kindest/kindnetd:0.5.4" already present on machine
kube-system                          3m10s       Normal    Created                      pod/kindnet-6rhkp                                           Created container kindnet-cni
kube-system                          3m9s        Normal    Started                      pod/kindnet-6rhkp                                           Started container kindnet-cni
kube-system                          3m14s       Normal    Scheduled                    pod/kindnet-jxzbl                                           Successfully assigned kube-system/kindnet-jxzbl to kind-worker2
kube-system                          3m13s       Normal    Pulled                       pod/kindnet-jxzbl                                           Container image "kindest/kindnetd:0.5.4" already present on machine
kube-system                          3m10s       Normal    Created                      pod/kindnet-jxzbl                                           Created container kindnet-cni
kube-system                          3m9s        Normal    Started                      pod/kindnet-jxzbl                                           Started container kindnet-cni
kube-system                          3m32s       Normal    SuccessfulCreate             daemonset/kindnet                                           Created pod: kindnet-2hf8t
kube-system                          3m14s       Normal    SuccessfulCreate             daemonset/kindnet                                           Created pod: kindnet-jxzbl
kube-system                          3m13s       Normal    SuccessfulCreate             daemonset/kindnet                                           Created pod: kindnet-6rhkp
kube-system                          3m48s       Normal    LeaderElection               endpoints/kube-controller-manager                           kind-control-plane_271067d6-33cf-4930-9ba7-05996c920976 became leader
kube-system                          3m48s       Normal    LeaderElection               lease/kube-controller-manager                               kind-control-plane_271067d6-33cf-4930-9ba7-05996c920976 became leader
kube-system                          3m13s       Normal    Scheduled                    pod/kube-proxy-4md69                                        Successfully assigned kube-system/kube-proxy-4md69 to kind-worker
kube-system                          3m13s       Normal    Pulled                       pod/kube-proxy-4md69                                        Container image "k8s.gcr.io/kube-proxy:v1.18.0-alpha.1.681_c12a96f7f64648" already present on machine
kube-system                          3m10s       Normal    Created                      pod/kube-proxy-4md69                                        Created container kube-proxy
kube-system                          3m10s       Normal    Started                      pod/kube-proxy-4md69                                        Started container kube-proxy
kube-system                          3m32s       Normal    Scheduled                    pod/kube-proxy-rh967                                        Successfully assigned kube-system/kube-proxy-rh967 to kind-control-plane
kube-system                          3m31s       Normal    Pulled                       pod/kube-proxy-rh967                                        Container image "k8s.gcr.io/kube-proxy:v1.18.0-alpha.1.681_c12a96f7f64648" already present on machine
kube-system                          3m30s       Normal    Created                      pod/kube-proxy-rh967                                        Created container kube-proxy
kube-system                          3m30s       Normal    Started                      pod/kube-proxy-rh967                                        Started container kube-proxy
kube-system                          3m14s       Normal    Scheduled                    pod/kube-proxy-sllbk                                        Successfully assigned kube-system/kube-proxy-sllbk to kind-worker2
kube-system                          3m13s       Normal    Pulled                       pod/kube-proxy-sllbk                                        Container image "k8s.gcr.io/kube-proxy:v1.18.0-alpha.1.681_c12a96f7f64648" already present on machine
kube-system                          3m10s       Normal    Created                      pod/kube-proxy-sllbk                                        Created container kube-proxy
kube-system                          3m10s       Normal    Started                      pod/kube-proxy-sllbk                                        Started container kube-proxy
kube-system                          3m32s       Normal    SuccessfulCreate             daemonset/kube-proxy                                        Created pod: kube-proxy-rh967
kube-system                          3m14s       Normal    SuccessfulCreate             daemonset/kube-proxy                                        Created pod: kube-proxy-sllbk
kube-system                          3m13s       Normal    SuccessfulCreate             daemonset/kube-proxy                                        Created pod: kube-proxy-4md69
kube-system                          3m48s       Normal    LeaderElection               endpoints/kube-scheduler                                    kind-control-plane_c401d2bf-7ea3-46fd-a507-5d6babc3a00c became leader
kube-system                          3m48s       Normal    LeaderElection               lease/kube-scheduler                                        kind-control-plane_c401d2bf-7ea3-46fd-a507-5d6babc3a00c became leader
kubectl-8260                         2s          Normal    Scheduled                    pod/deployment4mjn46qfpsp-87fd78899-v5n6l                   Successfully assigned kubectl-8260/deployment4mjn46qfpsp-87fd78899-v5n6l to kind-worker
kubectl-8260                         2s          Normal    SuccessfulCreate             replicaset/deployment4mjn46qfpsp-87fd78899                  Created pod: deployment4mjn46qfpsp-87fd78899-v5n6l
kubectl-8260                         2s          Normal    ScalingReplicaSet            deployment/deployment4mjn46qfpsp                            Scaled up replica set deployment4mjn46qfpsp-87fd78899 to 1
kubectl-8260                         1s          Normal    Scheduled                    pod/ds6mjn46qfpsp-9ctgc                                     Successfully assigned kubectl-8260/ds6mjn46qfpsp-9ctgc to kind-worker2
kubectl-8260                         1s          Normal    Scheduled                    pod/ds6mjn46qfpsp-kplzs                                     Successfully assigned kubectl-8260/ds6mjn46qfpsp-kplzs to kind-worker
kubectl-8260                         1s          Normal    SuccessfulCreate             daemonset/ds6mjn46qfpsp                                     Created pod: ds6mjn46qfpsp-9ctgc
kubectl-8260                         1s          Normal    SuccessfulCreate             daemonset/ds6mjn46qfpsp                                     Created pod: ds6mjn46qfpsp-kplzs
kubectl-8260                         <unknown>             Laziness                                                                                 some data here
kubectl-8260                         8s          Warning   FailedScheduling             pod/pod1mjn46qfpsp                                          0/3 nodes are available: 3 Insufficient cpu.
kubectl-8260                         7s          Warning   FailedScheduling             pod/pod1mjn46qfpsp                                          skip schedule deleting pod: kubectl-8260/pod1mjn46qfpsp
kubectl-8260                         6s          Normal    WaitForFirstConsumer         persistentvolumeclaim/pvc1mjn46qfpsp                        waiting for first consumer to be created before binding
kubectl-8260                         5s          Normal    Scheduled                    pod/rc1mjn46qfpsp-dm6s9                                     Successfully assigned kubectl-8260/rc1mjn46qfpsp-dm6s9 to kind-worker
kubectl-8260                         2s          Normal    Pulling                      pod/rc1mjn46qfpsp-dm6s9                                     Pulling image "fedora:latest"
kubectl-8260                         5s          Normal    SuccessfulCreate             replicationcontroller/rc1mjn46qfpsp                         Created pod: rc1mjn46qfpsp-dm6s9
kubectl-8260                         2s          Normal    Scheduled                    pod/rs3mjn46qfpsp-thphb                                     Successfully assigned kubectl-8260/rs3mjn46qfpsp-thphb to kind-worker
kubectl-8260                         2s          Normal    SuccessfulCreate             replicaset/rs3mjn46qfpsp                                    Created pod: rs3mjn46qfpsp-thphb
kubectl-8260                         1s          Warning   FailedCreate                 statefulset/ss3mjn46qfpsp                                   create Pod ss3mjn46qfpsp-0 in StatefulSet ss3mjn46qfpsp failed error: Pod "ss3mjn46qfpsp-0" is invalid: spec.containers: Required value
kubectl-8440                         4s          Normal    Scheduled                    pod/update-demo-nautilus-r76b7                              Successfully assigned kubectl-8440/update-demo-nautilus-r76b7 to kind-worker
kubectl-8440                         1s          Normal    Pulled                       pod/update-demo-nautilus-r76b7                              Container image "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" already present on machine
kubectl-8440                         4s          Normal    Scheduled                    pod/update-demo-nautilus-rl8tb                              Successfully assigned kubectl-8440/update-demo-nautilus-rl8tb to kind-worker2
kubectl-8440                         1s          Normal    Pulled                       pod/update-demo-nautilus-rl8tb                              Container image "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" already present on machine
kubectl-8440                         1s          Normal    Created                      pod/update-demo-nautilus-rl8tb                              Created container update-demo
kubectl-8440                         4s          Normal    SuccessfulCreate             replicationcontroller/update-demo-nautilus                  Created pod: update-demo-nautilus-rl8tb
kubectl-8440                         4s          Normal    SuccessfulCreate             replicationcontroller/update-demo-nautilus                  Created pod: update-demo-nautilus-r76b7
local-path-storage                   27s         Normal    Pulled                       pod/create-pvc-2a5ae533-7ba5-4a0c-a04a-5e5484bf85bb         Container image "k8s.gcr.io/debian-base:v2.0.0" already present on machine
local-path-storage                   27s         Normal    Created                      pod/create-pvc-2a5ae533-7ba5-4a0c-a04a-5e5484bf85bb         Created container local-path-create
local-path-storage                   26s         Normal    Started                      pod/create-pvc-2a5ae533-7ba5-4a0c-a04a-5e5484bf85bb         Started container local-path-create
local-path-storage                   14s         Normal    Pulled                       pod/create-pvc-cb3c702b-655f-4cd1-b586-5b359f48624d         Container image "k8s.gcr.io/debian-base:v2.0.0" already present on machine
local-path-storage                   14s         Normal    Created                      pod/create-pvc-cb3c702b-655f-4cd1-b586-5b359f48624d         Created container local-path-create
local-path-storage                   14s         Normal    Started                      pod/create-pvc-cb3c702b-655f-4cd1-b586-5b359f48624d         Started container local-path-create
local-path-storage                   3m17s       Warning   FailedScheduling             pod/local-path-provisioner-7745554f7f-9fxhw                 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
local-path-storage                   3m15s       Normal    Scheduled                    pod/local-path-provisioner-7745554f7f-9fxhw                 Successfully assigned local-path-storage/local-path-provisioner-7745554f7f-9fxhw to kind-control-plane
local-path-storage                   3m14s       Normal    Pulled                       pod/local-path-provisioner-7745554f7f-9fxhw                 Container image "rancher/local-path-provisioner:v0.0.11" already present on machine
local-path-storage                   3m13s       Normal    Created                      pod/local-path-provisioner-7745554f7f-9fxhw                 Created container local-path-provisioner
local-path-storage                   3m13s       Normal    Started                      pod/local-path-provisioner-7745554f7f-9fxhw                 Started container local-path-provisioner
local-path-storage                   3m32s       Normal    SuccessfulCreate             replicaset/local-path-provisioner-7745554f7f                Created pod: local-path-provisioner-7745554f7f-9fxhw
local-path-storage                   3m32s       Normal    ScalingReplicaSet            deployment/local-path-provisioner                           Scaled up replica set local-path-provisioner-7745554f7f to 1
local-path-storage                   3m13s       Normal    LeaderElection               endpoints/rancher.io-local-path                             local-path-provisioner-7745554f7f-9fxhw_bea625e9-3714-11ea-bdbd-0e6ef275eabc became leader
nettest-4461                         14s         Normal    Scheduled                    pod/netserver-0                                             Successfully assigned nettest-4461/netserver-0 to kind-worker
nettest-4461                         12s         Normal    Pulled                       pod/netserver-0                                             Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
nettest-4461                         11s         Normal    Created                      pod/netserver-0                                             Created container webserver
nettest-4461                         11s         Normal    Started                      pod/netserver-0                                             Started container webserver
nettest-4461                         14s         Normal    Scheduled                    pod/netserver-1                                             Successfully assigned nettest-4461/netserver-1 to kind-worker2
nettest-4461                         12s         Normal    Pulled                       pod/netserver-1                                             Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
nettest-4461                         11s         Normal    Created                      pod/netserver-1                                             Created container webserver
nettest-4461                         11s         Normal    Started                      pod/netserver-1                                             Started container webserver
nettest-9734                         53s         Normal    Scheduled                    pod/netserver-0                                             Successfully assigned nettest-9734/netserver-0 to kind-worker
nettest-9734                         51s         Normal    Pulled                       pod/netserver-0                                             Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
nettest-9734                         51s         Normal    Created                      pod/netserver-0                                             Created container webserver
nettest-9734                         51s         Normal    Started                      pod/netserver-0                                             Started container webserver
nettest-9734                         53s         Normal    Scheduled                    pod/netserver-1                                             Successfully assigned nettest-9734/netserver-1 to kind-worker2
nettest-9734                         51s         Normal    Pulled                       pod/netserver-1                                             Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
nettest-9734                         51s         Normal    Created                      pod/netserver-1                                             Created container webserver
nettest-9734                         51s         Normal    Started                      pod/netserver-1                                             Started container webserver
nettest-9734                         23s         Normal    Scheduled                    pod/test-container-pod                                      Successfully assigned nettest-9734/test-container-pod to kind-worker2
nettest-9734                         22s         Normal    Pulled                       pod/test-container-pod                                      Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
nettest-9734                         22s         Normal    Created                      pod/test-container-pod                                      Created container webserver
nettest-9734                         21s         Normal    Started                      pod/test-container-pod                                      Started container webserver
persistent-local-volumes-test-2002   2s          Normal    Pulled                       pod/hostexec-kind-worker-6tbbr                              Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
persistent-local-volumes-test-2002   2s          Normal    Created                      pod/hostexec-kind-worker-6tbbr                              Created container agnhost
persistent-local-volumes-test-2002   1s          Normal    Started                      pod/hostexec-kind-worker-6tbbr                              Started container agnhost
persistent-local-volumes-test-6760   34s         Normal    Pulled                       pod/hostexec-kind-worker-dz7v4                              Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
persistent-local-volumes-test-6760   34s         Normal    Created                      pod/hostexec-kind-worker-dz7v4                              Created container agnhost
persistent-local-volumes-test-6760   34s         Normal    Started                      pod/hostexec-kind-worker-dz7v4                              Started container agnhost
persistent-local-volumes-test-6760   23s         Warning   ProvisioningFailed           persistentvolumeclaim/pvc-nw5n2                             no volume plugin matched
persistent-local-volumes-test-6760   17s         Normal    Scheduled                    pod/security-context-6cd2d986-e61a-4766-bbd4-c2214e21d0e9   Successfully assigned persistent-local-volumes-test-6760/security-context-6cd2d986-e61a-4766-bbd4-c2214e21d0e9 to kind-worker
persistent-local-volumes-test-6760   15s
       Normal    Pulled                       pod/security-context-6cd2d986-e61a-4766-bbd4-c2214e21d0e9   Container image \"docker.io/library/busybox:1.29\" already present on machine\npersistent-local-volumes-test-6760   15s         Normal    Created                      pod/security-context-6cd2d986-e61a-4766-bbd4-c2214e21d0e9   Created container write-pod\npersistent-local-volumes-test-6760   14s         Normal    Started                      pod/security-context-6cd2d986-e61a-4766-bbd4-c2214e21d0e9   Started container write-pod\npersistent-local-volumes-test-6760   6s          Normal    Killing                      pod/security-context-6cd2d986-e61a-4766-bbd4-c2214e21d0e9   Stopping container write-pod\npersistent-local-volumes-test-7750   24s         Normal    Pulled                       pod/hostexec-kind-worker-j8r65                              Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npersistent-local-volumes-test-7750   24s         Normal    Created                      pod/hostexec-kind-worker-j8r65                              Created container agnhost\npersistent-local-volumes-test-7750   23s         Normal    Started                      pod/hostexec-kind-worker-j8r65                              Started container agnhost\npersistent-local-volumes-test-7782   3s          Normal    Pulled                       pod/hostexec-kind-worker-xhpmw                              Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\npersistent-local-volumes-test-7782   2s          Normal    Created                      pod/hostexec-kind-worker-xhpmw                              Created container agnhost\npersistent-local-volumes-test-7782   1s          Normal    Started                      pod/hostexec-kind-worker-xhpmw                              Started container agnhost\npod-network-test-1590                0s          Normal    Scheduled                    
pod/netserver-0                                             Successfully assigned pod-network-test-1590/netserver-0 to kind-worker\npod-network-test-1590                0s          Normal    Scheduled                    pod/netserver-1                                             Successfully assigned pod-network-test-1590/netserver-1 to kind-worker2\nprovisioning-4063                    7s          Normal    Pulled                       pod/csi-hostpath-attacher-0                                 Container image \"quay.io/k8scsi/csi-attacher:v2.1.0\" already present on machine\nprovisioning-4063                    7s          Normal    Created                      pod/csi-hostpath-attacher-0                                 Created container csi-attacher\nprovisioning-4063                    6s          Normal    Started                      pod/csi-hostpath-attacher-0                                 Started container csi-attacher\nprovisioning-4063                    10s         Normal    SuccessfulCreate             statefulset/csi-hostpath-attacher                           create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\nprovisioning-4063                    6s          Normal    Pulled                       pod/csi-hostpath-provisioner-0                              Container image \"quay.io/k8scsi/csi-provisioner:v1.5.0\" already present on machine\nprovisioning-4063                    6s          Normal    Created                      pod/csi-hostpath-provisioner-0                              Created container csi-provisioner\nprovisioning-4063                    5s          Normal    Started                      pod/csi-hostpath-provisioner-0                              Started container csi-provisioner\nprovisioning-4063                    9s          Normal    SuccessfulCreate             statefulset/csi-hostpath-provisioner                        create Pod csi-hostpath-provisioner-0 in StatefulSet 
csi-hostpath-provisioner successful\nprovisioning-4063                    6s          Normal    Pulled                       pod/csi-hostpath-resizer-0                                  Container image \"quay.io/k8scsi/csi-resizer:v0.4.0\" already present on machine\nprovisioning-4063                    5s          Normal    Created                      pod/csi-hostpath-resizer-0                                  Created container csi-resizer\nprovisioning-4063                    4s          Normal    Started                      pod/csi-hostpath-resizer-0                                  Started container csi-resizer\nprovisioning-4063                    9s          Normal    SuccessfulCreate             statefulset/csi-hostpath-resizer                            create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\nprovisioning-4063                    7s          Normal    Pulled                       pod/csi-hostpathplugin-0                                    Container image \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\" already present on machine\nprovisioning-4063                    6s          Normal    Created                      pod/csi-hostpathplugin-0                                    Created container node-driver-registrar\nprovisioning-4063                    5s          Normal    Started                      pod/csi-hostpathplugin-0                                    Started container node-driver-registrar\nprovisioning-4063                    5s          Normal    Pulled                       pod/csi-hostpathplugin-0                                    Container image \"quay.io/k8scsi/hostpathplugin:v1.3.0-rc1\" already present on machine\nprovisioning-4063                    5s          Normal    Created                      pod/csi-hostpathplugin-0                                    Created container hostpath\nprovisioning-4063                    4s          Normal    Started                      
pod/csi-hostpathplugin-0                                    Started container hostpath\nprovisioning-4063                    4s          Normal    Pulled                       pod/csi-hostpathplugin-0                                    Container image \"quay.io/k8scsi/livenessprobe:v1.1.0\" already present on machine\nprovisioning-4063                    4s          Normal    Created                      pod/csi-hostpathplugin-0                                    Created container liveness-probe\nprovisioning-4063                    2s          Normal    Started                      pod/csi-hostpathplugin-0                                    Started container liveness-probe\nprovisioning-4063                    9s          Normal    SuccessfulCreate             statefulset/csi-hostpathplugin                              create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\nprovisioning-4063                    9s          Normal    ExternalProvisioning         persistentvolumeclaim/csi-hostpathtg256                     waiting for a volume to be created, either by external provisioner \"csi-hostpath-provisioning-4063\" or manually created by system administrator\nprovisioning-4063                    3s          Normal    Provisioning                 persistentvolumeclaim/csi-hostpathtg256                     External provisioner is provisioning volume for claim \"provisioning-4063/csi-hostpathtg256\"\nprovisioning-4063                    3s          Normal    ProvisioningSucceeded        persistentvolumeclaim/csi-hostpathtg256                     Successfully provisioned volume pvc-2c33ac2c-06ae-48c4-9c0a-d124413d9cf7\nprovisioning-4063                    6s          Normal    Pulled                       pod/csi-snapshotter-0                                       Container image \"quay.io/k8scsi/csi-snapshotter:v2.0.0\" already present on machine\nprovisioning-4063                    6s          Normal    Created                      
pod/csi-snapshotter-0                                       Created container csi-snapshotter\nprovisioning-4063                    4s          Normal    Started                      pod/csi-snapshotter-0                                       Started container csi-snapshotter\nprovisioning-4063                    9s          Normal    SuccessfulCreate             statefulset/csi-snapshotter                                 create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful\nprovisioning-4063                    0s          Normal    SuccessfulAttachVolume       pod/pod-subpath-test-dynamicpv-hg6n                         AttachVolume.Attach succeeded for volume \"pvc-2c33ac2c-06ae-48c4-9c0a-d124413d9cf7\"\nproxy-8892                           8s          Normal    Scheduled                    pod/proxy-service-chhwb-5p8dz                               Successfully assigned proxy-8892/proxy-service-chhwb-5p8dz to kind-worker\nproxy-8892                           4s          Normal    Pulled                       pod/proxy-service-chhwb-5p8dz                               Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nproxy-8892                           4s          Normal    Created                      pod/proxy-service-chhwb-5p8dz                               Created container proxy-service-chhwb\nproxy-8892                           3s          Normal    Started                      pod/proxy-service-chhwb-5p8dz                               Started container proxy-service-chhwb\nproxy-8892                           9s          Normal    SuccessfulCreate             replicationcontroller/proxy-service-chhwb                   Created pod: proxy-service-chhwb-5p8dz\nsecurity-context-test-7441           11s         Normal    Scheduled                    pod/alpine-nnp-nil-6247ddf1-49f6-4bc5-a499-7f36e918507d     Successfully assigned 
security-context-test-7441/alpine-nnp-nil-6247ddf1-49f6-4bc5-a499-7f36e918507d to kind-worker\nsecurity-context-test-7441           10s         Normal    Pulled                       pod/alpine-nnp-nil-6247ddf1-49f6-4bc5-a499-7f36e918507d     Container image \"gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0\" already present on machine\nsecurity-context-test-7441           10s         Normal    Created                      pod/alpine-nnp-nil-6247ddf1-49f6-4bc5-a499-7f36e918507d     Created container alpine-nnp-nil-6247ddf1-49f6-4bc5-a499-7f36e918507d\nsecurity-context-test-7441           9s          Normal    Started                      pod/alpine-nnp-nil-6247ddf1-49f6-4bc5-a499-7f36e918507d     Started container alpine-nnp-nil-6247ddf1-49f6-4bc5-a499-7f36e918507d\nservices-2366                        31s         Normal    Scheduled                    pod/pod1                                                    Successfully assigned services-2366/pod1 to kind-worker\nservices-2366                        28s         Normal    Pulled                       pod/pod1                                                    Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-2366                        28s         Normal    Created                      pod/pod1                                                    Created container pause\nservices-2366                        27s         Normal    Started                      pod/pod1                                                    Started container pause\nservices-2366                        1s          Normal    Killing                      pod/pod1                                                    Stopping container pause\nservices-2366                        16s         Normal    Scheduled                    pod/pod2                                                    Successfully assigned services-2366/pod2 to kind-worker2\nservices-2366                        14s         
Normal    Pulled                       pod/pod2                                                    Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-2366                        14s         Normal    Created                      pod/pod2                                                    Created container pause\nservices-2366                        13s         Normal    Started                      pod/pod2                                                    Started container pause\nservices-2366                        1s          Normal    Killing                      pod/pod2                                                    Stopping container pause\nservices-8847                        5s          Normal    Scheduled                    pod/hostexec                                                Successfully assigned services-8847/hostexec to kind-worker\nservices-8847                        2s          Normal    Pulled                       pod/hostexec                                                Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\nservices-8847                        2s          Normal    Created                      pod/hostexec                                                Created container agnhost\nservices-8847                        1s          Normal    Started                      pod/hostexec                                                Started container agnhost\nservices-9440                        2s          Normal    Scheduled                    pod/pod1                                                    Successfully assigned services-9440/pod1 to kind-worker\nstatefulset-1314                     30s         Normal    WaitForFirstConsumer         persistentvolumeclaim/datadir-ss-0                          waiting for first consumer to be created before binding\nstatefulset-1314                     30s         Normal    
ExternalProvisioning         persistentvolumeclaim/datadir-ss-0                          waiting for a volume to be created, either by external provisioner \"rancher.io/local-path\" or manually created by system administrator\nstatefulset-1314                     30s         Normal    Provisioning                 persistentvolumeclaim/datadir-ss-0                          External provisioner is provisioning volume for claim \"statefulset-1314/datadir-ss-0\"\nstatefulset-1314                     19s         Normal    ProvisioningSucceeded        persistentvolumeclaim/datadir-ss-0                          Successfully provisioned volume pvc-2a5ae533-7ba5-4a0c-a04a-5e5484bf85bb\nstatefulset-1314                     30s         Warning   FailedScheduling             pod/ss-0                                                    persistentvolumeclaim \"datadir-ss-0\" not found\nstatefulset-1314                     18s         Normal    Scheduled                    pod/ss-0                                                    Successfully assigned statefulset-1314/ss-0 to kind-worker\nstatefulset-1314                     16s         Normal    Pulling                      pod/ss-0                                                    Pulling image \"docker.io/library/httpd:2.4.38-alpine\"\nstatefulset-1314                     1s          Normal    Pulled                       pod/ss-0                                                    Successfully pulled image \"docker.io/library/httpd:2.4.38-alpine\"\nstatefulset-1314                     1s          Normal    Created                      pod/ss-0                                                    Created container webserver\nstatefulset-1314                     30s         Normal    SuccessfulCreate             statefulset/ss                                              create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success\nstatefulset-1314                     30s         Normal    SuccessfulCreate             
statefulset/ss                                              create Pod ss-0 in StatefulSet ss successful\nstatefulset-742                      15s         Normal    WaitForFirstConsumer         persistentvolumeclaim/datadir-ss-0                          waiting for first consumer to be created before binding\nstatefulset-742                      15s         Normal    ExternalProvisioning         persistentvolumeclaim/datadir-ss-0                          waiting for a volume to be created, either by external provisioner \"rancher.io/local-path\" or manually created by system administrator\nstatefulset-742                      15s         Normal    Provisioning                 persistentvolumeclaim/datadir-ss-0                          External provisioner is provisioning volume for claim \"statefulset-742/datadir-ss-0\"\nstatefulset-742                      6s          Normal    ProvisioningSucceeded        persistentvolumeclaim/datadir-ss-0                          Successfully provisioned volume pvc-cb3c702b-655f-4cd1-b586-5b359f48624d\nstatefulset-742                      5s          Normal    Scheduled                    pod/ss-0                                                    Successfully assigned statefulset-742/ss-0 to kind-worker2\nstatefulset-742                      2s          Normal    Pulled                       pod/ss-0                                                    Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\nstatefulset-742                      2s          Normal    Created                      pod/ss-0                                                    Created container webserver\nstatefulset-742                      1s          Normal    Started                      pod/ss-0                                                    Started container webserver\nstatefulset-742                      15s         Normal    SuccessfulCreate             statefulset/ss                                              
create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success\nstatefulset-742                      15s         Normal    SuccessfulCreate             statefulset/ss                                              create Pod ss-0 in StatefulSet ss successful\nvolume-6476                          103s        Normal    Pulled                       pod/hostpath-symlink-prep-volume-6476                       Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-6476                          102s        Normal    Created                      pod/hostpath-symlink-prep-volume-6476                       Created container init-volume-volume-6476\nvolume-6476                          102s        Normal    Started                      pod/hostpath-symlink-prep-volume-6476                       Started container init-volume-volume-6476\nvolume-6476                          1s          Normal    Pulled                       pod/hostpath-symlink-prep-volume-6476                       Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-6476                          38s         Normal    Pulled                       pod/hostpathsymlink-client                                  Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-6476                          38s         Normal    Created                      pod/hostpathsymlink-client                                  Created container hostpathsymlink-client\nvolume-6476                          38s         Normal    Started                      pod/hostpathsymlink-client                                  Started container hostpathsymlink-client\nvolume-6476                          19s         Normal    Killing                      pod/hostpathsymlink-client                                  Stopping container hostpathsymlink-client\nvolume-6476                          83s         Normal    Pulled                       
pod/hostpathsymlink-injector                                Container image \"docker.io/library/busybox:1.29\" already present on machine\nvolume-6476                          83s         Normal    Created                      pod/hostpathsymlink-injector                                Created container hostpathsymlink-injector\nvolume-6476                          83s         Normal    Started                      pod/hostpathsymlink-injector                                Started container hostpathsymlink-injector\nvolume-6476                          66s         Normal    Killing                      pod/hostpathsymlink-injector                                Stopping container hostpathsymlink-injector\n"
Jan 14 21:31:15.456: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:32768 --kubeconfig=/root/.kube/kind-test-config get horizontalpodautoscalers --all-namespaces'
Jan 14 21:31:15.661: INFO: stderr: ""
Jan 14 21:31:15.661: INFO: stdout: "NAMESPACE      NAME             REFERENCE         TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
kubectl-8260   hpa2mjn46qfpsp   something/cross   <unknown>/80%   1         3         0          0s
"
Jan 14 21:31:15.689: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:32768 --kubeconfig=/root/.kube/kind-test-config get jobs --all-namespaces'
Jan 14 21:31:15.871: INFO: stderr: ""
Jan 14 21:31:15.871: INFO: stdout: "NAMESPACE      NAME             COMPLETIONS   DURATION   AGE
job-3875       adopt-release    0/4           25s        25s
kubectl-8260   job1mjn46qfpsp   0/1           0s         0s
"
... skipping 62 lines ...
test/e2e/kubectl/framework.go:23
  kubectl get output
  test/e2e/kubectl/kubectl.go:424
    should contain custom columns for each resource
    test/e2e/kubectl/kubectl.go:425
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client kubectl get output should contain custom columns for each resource","total":-1,"completed":8,"skipped":66,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Zone Support
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 62 lines ...
  test/e2e/framework/framework.go:175
Jan 14 21:31:21.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-8454" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.","total":-1,"completed":9,"skipped":74,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:31:21.642: INFO: Only supported for providers [openstack] (not skeleton)
[AfterEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/framework/framework.go:175
Jan 14 21:31:21.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 138 lines ...
  test/e2e/storage/csi_volumes.go:55
    [Testpattern: inline ephemeral CSI volume] ephemeral
    test/e2e/storage/testsuites/base.go:94
      should create read-only inline ephemeral volume
      test/e2e/storage/testsuites/ephemeral.go:116
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":4,"skipped":63,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 24 lines ...
• [SLOW TEST:42.943 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should remove from active list jobs that have been deleted
  test/e2e/apps/cronjob.go:195
------------------------------
{"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":7,"skipped":21,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:31:29.933: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 128 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] volumes
    test/e2e/storage/testsuites/base.go:94
      should store data
      test/e2e/storage/testsuites/volumes.go:152
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":3,"skipped":36,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 22 lines ...
• [SLOW TEST:21.851 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should release NodePorts on delete
  test/e2e/network/service.go:1873
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":2,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:31:31.319: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/framework/framework.go:175
Jan 14 21:31:31.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 95 lines ...
test/e2e/network/framework.go:23
  Granular Checks: Services
  test/e2e/network/networking.go:161
    should function for client IP based session affinity: http [LinuxOnly]
    test/e2e/network/networking.go:264
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: http [LinuxOnly]","total":-1,"completed":6,"skipped":65,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:31:32.249: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  test/e2e/framework/framework.go:175
Jan 14 21:31:32.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 93 lines ...
test/e2e/framework/framework.go:680
  when scheduling a busybox Pod with hostAliases
  test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":8,"skipped":63,"failed":0}
[BeforeEach] [k8s.io] [sig-node] Security Context
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 14 21:31:19.473: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 26 lines ...
• [SLOW TEST:18.374 seconds]
[k8s.io] [sig-node] Security Context
test/e2e/framework/framework.go:680
  should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly]
  test/e2e/node/security_context.go:88
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly]","total":-1,"completed":9,"skipped":63,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 96 lines ...
test/e2e/kubectl/framework.go:23
  Update Demo
  test/e2e/kubectl/kubectl.go:330
    should create and stop a replication controller  [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":-1,"completed":4,"skipped":21,"failed":0}

SSS
------------------------------
[BeforeEach] version v1
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 368 lines ...
test/e2e/network/framework.go:23
  version v1
  test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":6,"skipped":42,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] vsphere statefulset
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 175 lines ...
  test/e2e/storage/csi_volumes.go:55
    [Testpattern: inline ephemeral CSI volume] ephemeral
    test/e2e/storage/testsuites/base.go:94
      should support two pods which share the same volume
      test/e2e/storage/testsuites/ephemeral.go:140
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should support two pods which share the same volume","total":-1,"completed":5,"skipped":47,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:31:46.405: INFO: Driver local doesn't support ntfs -- skipping
... skipping 15 lines ...
      Driver local doesn't support ntfs -- skipping

      test/e2e/storage/testsuites/base.go:153
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":5,"skipped":56,"failed":0}
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 14 21:31:01.626: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 45 lines ...
test/e2e/network/framework.go:23
  Granular Checks: Services
  test/e2e/network/networking.go:161
    should be able to handle large requests: http
    test/e2e/network/networking.go:299
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should be able to handle large requests: http","total":-1,"completed":6,"skipped":56,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:31:48.112: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 84 lines ...
• [SLOW TEST:20.345 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":5,"skipped":68,"failed":0}

S
------------------------------
[BeforeEach] [sig-auth] Metadata Concealment
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 48 lines ...
[BeforeEach] [sig-network] Services
  test/e2e/network/service.go:688
[It] should serve multiport endpoints from pods  [Conformance]
  test/e2e/framework/framework.go:685
STEP: creating service multi-endpoint-test in namespace services-9440
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9440 to expose endpoints map[]
Jan 14 21:31:11.949: INFO: Get endpoints failed (9.068904ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan 14 21:31:12.978: INFO: successfully validated that service multi-endpoint-test in namespace services-9440 exposes endpoints map[] (1.038993241s elapsed)
STEP: Creating pod pod1 in namespace services-9440
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9440 to expose endpoints map[pod1:[100]]
Jan 14 21:31:17.186: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.160564324s elapsed, will retry)
Jan 14 21:31:22.364: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (9.338723259s elapsed, will retry)
Jan 14 21:31:27.462: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (14.436186232s elapsed, will retry)
... skipping 22 lines ...
• [SLOW TEST:38.419 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":-1,"completed":9,"skipped":72,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:31:50.108: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 155 lines ...
  test/e2e/storage/persistent_volumes-local.go:187
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:204
      should be able to mount volume and write from pod1
      test/e2e/storage/persistent_volumes-local.go:233
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":6,"skipped":50,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 27 lines ...
• [SLOW TEST:12.252 seconds]
[sig-storage] Secrets
test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":24,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 42 lines ...
test/e2e/common/networking.go:26
  Granular Checks: Pods
  test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":130,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:31:53.648: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:175
Jan 14 21:31:53.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 47 lines ...
• [SLOW TEST:8.292 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":66,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:31:56.422: INFO: Only supported for providers [gce gke] (not skeleton)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/framework/framework.go:175
Jan 14 21:31:56.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 158 lines ...
  test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should be able to unmount after the subpath directory is deleted
      test/e2e/storage/testsuites/subpath.go:440
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":6,"skipped":20,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:31:57.069: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 39 lines ...
      Only supported for providers [vsphere] (not skeleton)

      test/e2e/storage/drivers/in_tree.go:1383
------------------------------
SSS
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":13,"failed":0}
[BeforeEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 14 21:31:34.034: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 18 lines ...
• [SLOW TEST:25.726 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a custom resource.
  test/e2e/apimachinery/resource_quota.go:559
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.","total":-1,"completed":5,"skipped":13,"failed":0}

SSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] Security Context
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 92 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/testsuites/base.go:94
      should allow exec of files on the volume
      test/e2e/storage/testsuites/volumes.go:193
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":3,"skipped":10,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:32:04.119: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 76 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: cinder]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:191

      Only supported for providers [openstack] (not skeleton)

      test/e2e/storage/drivers/in_tree.go:1080
------------------------------
... skipping 95 lines ...
• [SLOW TEST:16.188 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":137,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:32:09.854: INFO: Only supported for providers [gce gke] (not skeleton)
... skipping 6 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:191

      Only supported for providers [gce gke] (not skeleton)

      test/e2e/storage/drivers/in_tree.go:1255
------------------------------
... skipping 87 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:94
      should support non-existent path
      test/e2e/storage/testsuites/subpath.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path","total":-1,"completed":10,"skipped":69,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:32:10.718: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 151 lines ...
Jan 14 21:30:56.704: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Jan 14 21:30:56.713: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-pnvs8] to have phase Bound
Jan 14 21:30:56.721: INFO: PersistentVolumeClaim pvc-pnvs8 found but phase is Pending instead of Bound.
Jan 14 21:30:58.726: INFO: PersistentVolumeClaim pvc-pnvs8 found but phase is Pending instead of Bound.
Jan 14 21:31:00.729: INFO: PersistentVolumeClaim pvc-pnvs8 found and phase=Bound (4.015937375s)
STEP: checking for CSIInlineVolumes feature
Jan 14 21:31:44.789: INFO: Error getting logs for pod csi-inline-volume-rzt9m: the server rejected our request for an unknown reason (get pods csi-inline-volume-rzt9m)
STEP: Deleting pod csi-inline-volume-rzt9m in namespace csi-mock-volumes-7850
WARNING: pod log: csi-inline-volume-rzt9m/csi-volume-tester: pods "csi-inline-volume-rzt9m" not found
STEP: Deleting the previously created pod
Jan 14 21:32:04.814: INFO: Deleting pod "pvc-volume-tester-98vdt" in namespace "csi-mock-volumes-7850"
Jan 14 21:32:04.826: INFO: Wait up to 5m0s for pod "pvc-volume-tester-98vdt" to be fully deleted
WARNING: pod log: pvc-volume-tester-98vdt/volume-tester: pods "pvc-volume-tester-98vdt" not found
STEP: Checking CSI driver logs
Jan 14 21:32:12.857: INFO: CSI driver logs:
mock driver started
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/Probe","Request":{},"Response":{"ready":{"value":true}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7850","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7850","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Service":{"type":1}}},{"Type":{"VolumeExpansion":{"type":1}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":3}}},{"Type":{"Rpc":{"type":4}}},{"Type":{"Rpc":{"type":6}}},{"Type":{"Rpc":{"type":5}}},{"Type":{"Rpc":{"type":8}}},{"Type":{"Rpc":{"type":7}}},{"Type":{"Rpc":{"type":2}}},{"Type":{"Rpc":{"type":9}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/CreateVolume","Request":{"name":"pvc-8459ee6b-d1dc-4cd8-814c-9341399138b7","capacity_range":{"required_bytes":1073741824},"volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}}]},"Response":{"volume":{"capacity_bytes":1073741824,"volume_id":"4","volume_context":{"name":"pvc-8459ee6b-d1dc-4cd8-814c-9341399138b7"}}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Identity/GetPluginInfo","Request":{},"Response":{"name":"csi-mock-csi-mock-volumes-7850","vendor_version":"0.3.0","manifest":{"url":"https://github.com/kubernetes-csi/csi-test/mock"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetInfo","Request":{},"Response":{"node_id":"csi-mock-csi-mock-volumes-7850","max_volumes_per_node":2},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerPublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-7850","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-8459ee6b-d1dc-4cd8-814c-9341399138b7","storage.kubernetes.io/csiProvisionerIdentity":"1579037459380-8081-csi-mock-csi-mock-volumes-7850"}},"Response":{"publish_context":{"device":"/dev/mock","readonly":"false"}},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeStageVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-8459ee6b-d1dc-4cd8-814c-9341399138b7/globalmount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"name":"pvc-8459ee6b-d1dc-4cd8-814c-9341399138b7","storage.kubernetes.io/csiProvisionerIdentity":"1579037459380-8081-csi-mock-csi-mock-volumes-7850"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodePublishVolume","Request":{"volume_id":"4","publish_context":{"device":"/dev/mock","readonly":"false"},"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-8459ee6b-d1dc-4cd8-814c-9341399138b7/globalmount","target_path":"/var/lib/kubelet/pods/4603d3c5-91e6-42b6-8d36-625f9ec69d89/volumes/kubernetes.io~csi/pvc-8459ee6b-d1dc-4cd8-814c-9341399138b7/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"ext4"}},"access_mode":{"mode":1}},"volume_context":{"csi.storage.k8s.io/ephemeral":"false","csi.storage.k8s.io/pod.name":"pvc-volume-tester-98vdt","csi.storage.k8s.io/pod.namespace":"csi-mock-volumes-7850","csi.storage.k8s.io/pod.uid":"4603d3c5-91e6-42b6-8d36-625f9ec69d89","csi.storage.k8s.io/serviceAccount.name":"default","name":"pvc-8459ee6b-d1dc-4cd8-814c-9341399138b7","storage.kubernetes.io/csiProvisionerIdentity":"1579037459380-8081-csi-mock-csi-mock-volumes-7850"}},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetVolumeStats","Request":{"volume_id":"4","volume_path":"/var/lib/kubelet/pods/4603d3c5-91e6-42b6-8d36-625f9ec69d89/volumes/kubernetes.io~csi/pvc-8459ee6b-d1dc-4cd8-814c-9341399138b7/mount"},"Response":{"usage":[{"total":1073741824,"unit":1}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/4603d3c5-91e6-42b6-8d36-625f9ec69d89/volumes/kubernetes.io~csi/pvc-8459ee6b-d1dc-4cd8-814c-9341399138b7/mount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeGetCapabilities","Request":{},"Response":{"capabilities":[{"Type":{"Rpc":{}}},{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]},"Error":""}
gRPCCall: {"Method":"/csi.v1.Node/NodeUnstageVolume","Request":{"volume_id":"4","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-8459ee6b-d1dc-4cd8-814c-9341399138b7/globalmount"},"Response":{},"Error":""}
gRPCCall: {"Method":"/csi.v1.Controller/ControllerUnpublishVolume","Request":{"volume_id":"4","node_id":"csi-mock-csi-mock-volumes-7850"},"Response":{},"Error":""}

Jan 14 21:32:12.857: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: 4603d3c5-91e6-42b6-8d36-625f9ec69d89
Jan 14 21:32:12.857: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false
Jan 14 21:32:12.857: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Jan 14 21:32:12.857: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-98vdt
Jan 14 21:32:12.857: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-7850
... skipping 43 lines ...
test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  test/e2e/storage/csi_mock_volume.go:296
    should be passed when podInfoOnMount=true
    test/e2e/storage/csi_mock_volume.go:346
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":5,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:32:15.190: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/framework/framework.go:175
Jan 14 21:32:15.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 51 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/testsuites/base.go:94
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:148
------------------------------
... skipping 101 lines ...
• [SLOW TEST:57.297 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  iterative rollouts should eventually progress
  test/e2e/apps/deployment.go:113
------------------------------
{"msg":"PASSED [sig-apps] Deployment iterative rollouts should eventually progress","total":-1,"completed":12,"skipped":75,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:32:17.800: INFO: Driver local doesn't support ntfs -- skipping
... skipping 37 lines ...
      Distro debian doesn't support ntfs -- skipping

      test/e2e/storage/testsuites/base.go:159
------------------------------
S
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":4,"skipped":29,"failed":0}
[BeforeEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 14 21:30:45.293: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 61 lines ...
test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/framework/framework.go:680
    should adopt matching orphans and release non-matching pods
    test/e2e/apps/statefulset.go:159
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods","total":-1,"completed":5,"skipped":29,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 70 lines ...
  test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume at the same time
    test/e2e/storage/persistent_volumes-local.go:243
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:244
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":7,"skipped":73,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:32:21.747: INFO: Driver vsphere doesn't support ext3 -- skipping
... skipping 194 lines ...
• [SLOW TEST:12.157 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":94,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:32:22.928: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 127 lines ...
• [SLOW TEST:30.940 seconds]
[sig-api-machinery] Aggregator
test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":7,"skipped":51,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 78 lines ...
  test/e2e/storage/persistent_volumes-local.go:187
    Two pods mounting a local volume at the same time
    test/e2e/storage/persistent_volumes-local.go:243
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:244
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2","total":-1,"completed":9,"skipped":63,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:32:24.906: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:175
Jan 14 21:32:24.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 203 lines ...
  test/e2e/storage/csi_volumes.go:55
    [Testpattern: Dynamic PV (block volmode)] volumeMode
    test/e2e/storage/testsuites/base.go:94
      should not mount / map unused volumes in a pod
      test/e2e/storage/testsuites/volumemode.go:332
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":8,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:95
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 59 lines ...
• [SLOW TEST:8.212 seconds]
[sig-storage] HostPath
test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":106,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:95
Jan 14 21:32:30.029: INFO: Driver local doesn't support ext3 -- skipping
... skipping 28 lines ...
  test/e2e/kubectl/kubectl.go:280
[It] should check if cluster-info dump succeeds
  test/e2e/kubectl/kubectl.go:1150
STEP: running cluster-info dump
Jan 14 21:32:30.081: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:32768 --kubeconfig=/root/.kube/kind-test-config cluster-info dump'
Jan 14 21:32:30.556: INFO: stderr: ""
Jan 14 21:32:30.556: INFO: stdout: "{\n    \"kind\": \"NodeList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"selfLink\": \"/api/v1/nodes\",\n        \"resourceVersion\": \"12192\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kind-control-plane\",\n                \"selfLink\": \"/api/v1/nodes/kind-control-plane\",\n                \"uid\": \"463295e6-ff31-4747-8659-7f8e155ce671\",\n                \"resourceVersion\": \"465\",\n                \"creationTimestamp\": \"2020-01-14T21:27:25Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"kind-control-plane\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"node-role.kubernetes.io/master\": \"\"\n                },\n                \"annotations\": {\n                    \"kubeadm.alpha.kubernetes.io/cri-socket\": \"/run/containerd/containerd.sock\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                },\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kubeadm\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:27:28Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:metadata\": {\n                                \"f:annotations\": {\n                                    \"f:kubeadm.alpha.kubernetes.io/cri-socket\": {}\n                                },\n                                \"f:labels\": {\n                                    
\"f:node-role.kubernetes.io/master\": {}\n                                }\n                            }\n                        }\n                    },\n                    {\n                        \"manager\": \"kubelet\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:27:58Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:metadata\": {\n                                \"f:annotations\": {\n                                    \"f:volumes.kubernetes.io/controller-managed-attach-detach\": {},\n                                    \".\": {}\n                                },\n                                \"f:labels\": {\n                                    \"f:beta.kubernetes.io/arch\": {},\n                                    \"f:beta.kubernetes.io/os\": {},\n                                    \"f:kubernetes.io/arch\": {},\n                                    \"f:kubernetes.io/hostname\": {},\n                                    \"f:kubernetes.io/os\": {},\n                                    \".\": {}\n                                }\n                            },\n                            \"f:status\": {\n                                \"f:addresses\": {\n                                    \"k:{\\\"type\\\":\\\"Hostname\\\"}\": {\n                                        \"f:address\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"InternalIP\\\"}\": {\n                                        \"f:address\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \".\": 
{}\n                                },\n                                \"f:allocatable\": {\n                                    \"f:cpu\": {},\n                                    \"f:ephemeral-storage\": {},\n                                    \"f:hugepages-1Gi\": {},\n                                    \"f:hugepages-2Mi\": {},\n                                    \"f:memory\": {},\n                                    \"f:pods\": {},\n                                    \".\": {}\n                                },\n                                \"f:capacity\": {\n                                    \"f:cpu\": {},\n                                    \"f:ephemeral-storage\": {},\n                                    \"f:hugepages-1Gi\": {},\n                                    \"f:hugepages-2Mi\": {},\n                                    \"f:memory\": {},\n                                    \"f:pods\": {},\n                                    \".\": {}\n                                },\n                                \"f:conditions\": {\n                                    \"k:{\\\"type\\\":\\\"DiskPressure\\\"}\": {\n                                        \"f:lastHeartbeatTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:message\": {},\n                                        \"f:reason\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"MemoryPressure\\\"}\": {\n                                        \"f:lastHeartbeatTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:message\": {},\n                                        \"f:reason\": {},\n                                        
\"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"PIDPressure\\\"}\": {\n                                        \"f:lastHeartbeatTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:message\": {},\n                                        \"f:reason\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                        \"f:lastHeartbeatTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:message\": {},\n                                        \"f:reason\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \".\": {}\n                                },\n                                \"f:daemonEndpoints\": {\n                                    \"f:kubeletEndpoint\": {\n                                        \"f:Port\": {}\n                                    }\n                                },\n                                \"f:images\": {},\n                                \"f:nodeInfo\": {\n                                    \"f:architecture\": {},\n                                    \"f:bootID\": {},\n                                    \"f:containerRuntimeVersion\": {},\n                                    \"f:kernelVersion\": {},\n                                    
\"f:kubeProxyVersion\": {},\n                                    \"f:kubeletVersion\": {},\n                                    \"f:machineID\": {},\n                                    \"f:operatingSystem\": {},\n                                    \"f:osImage\": {},\n                                    \"f:systemUUID\": {}\n                                }\n                            }\n                        }\n                    },\n                    {\n                        \"manager\": \"kube-controller-manager\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:27:58Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:metadata\": {\n                                \"f:annotations\": {\n                                    \"f:node.alpha.kubernetes.io/ttl\": {}\n                                }\n                            },\n                            \"f:spec\": {\n                                \"f:podCIDR\": {},\n                                \"f:podCIDRs\": {\n                                    \"v:\\\"10.244.0.0/24\\\"\": {},\n                                    \".\": {}\n                                },\n                                \"f:taints\": {}\n                            }\n                        }\n                    }\n                ]\n            },\n            \"spec\": {\n                \"podCIDR\": \"10.244.0.0/24\",\n                \"podCIDRs\": [\n                    \"10.244.0.0/24\"\n                ],\n                \"taints\": [\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ]\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"cpu\": \"8\",\n 
                   \"ephemeral-storage\": \"253882800Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"53582972Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"cpu\": \"8\",\n                    \"ephemeral-storage\": \"253882800Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"53582972Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2020-01-14T21:27:58Z\",\n                        \"lastTransitionTime\": \"2020-01-14T21:27:21Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2020-01-14T21:27:58Z\",\n                        \"lastTransitionTime\": \"2020-01-14T21:27:21Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2020-01-14T21:27:58Z\",\n                        \"lastTransitionTime\": \"2020-01-14T21:27:21Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                  
      \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2020-01-14T21:27:58Z\",\n                        \"lastTransitionTime\": \"2020-01-14T21:27:58Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.17.0.2\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"kind-control-plane\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"93bedd858bf64649a53eaec233be7283\",\n                    \"systemUUID\": \"c854ed34-d9d3-4dfc-a29a-2b6ee4afa741\",\n                    \"bootID\": \"69e06f15-02c9-4431-925a-ebcc758292f9\",\n                    \"kernelVersion\": \"4.15.0-1044-gke\",\n                    \"osImage\": \"Ubuntu 19.10\",\n                    \"containerRuntimeVersion\": \"containerd://1.3.2\",\n                    \"kubeletVersion\": \"v1.18.0-alpha.1.681+c12a96f7f64648\",\n                    \"kubeProxyVersion\": \"v1.18.0-alpha.1.681+c12a96f7f64648\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/etcd:3.4.3-0\"\n                        ],\n                        \"sizeBytes\": 289997247\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/kube-apiserver:v1.18.0-alpha.1.681_c12a96f7f64648\"\n                        ],\n                        \"sizeBytes\": 196168859\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-controller-manager:v1.18.0-alpha.1.681_c12a96f7f64648\"\n                        ],\n                        \"sizeBytes\": 181218468\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.1.681_c12a96f7f64648\"\n                        ],\n                        \"sizeBytes\": 123638266\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/kindest/kindnetd:0.5.4\"\n                        ],\n                        \"sizeBytes\": 113207016\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-scheduler:v1.18.0-alpha.1.681_c12a96f7f64648\"\n                        ],\n                        \"sizeBytes\": 102247578\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/debian-base:v2.0.0\"\n                        ],\n                        \"sizeBytes\": 53884301\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/coredns:1.6.5\"\n                        ],\n                        \"sizeBytes\": 41705951\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/rancher/local-path-provisioner:v0.0.11\"\n                        ],\n                        \"sizeBytes\": 36513375\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause:3.1\"\n                       
 ],\n                        \"sizeBytes\": 746479\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kind-worker\",\n                \"selfLink\": \"/api/v1/nodes/kind-worker\",\n                \"uid\": \"fc8feb08-f0a7-4729-84ae-e46179484a52\",\n                \"resourceVersion\": \"11434\",\n                \"creationTimestamp\": \"2020-01-14T21:28:02Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"kind-worker\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"topology.hostpath.csi/node\": \"kind-worker\"\n                },\n                \"annotations\": {\n                    \"csi.volume.kubernetes.io/nodeid\": \"{\\\"csi-hostpath-ephemeral-4014\\\":\\\"kind-worker\\\",\\\"csi-hostpath-provisioning-7976\\\":\\\"kind-worker\\\",\\\"csi-mock-csi-mock-volumes-6885\\\":\\\"csi-mock-csi-mock-volumes-6885\\\"}\",\n                    \"kubeadm.alpha.kubernetes.io/cri-socket\": \"/run/containerd/containerd.sock\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"10.244.2.0/24\",\n                \"podCIDRs\": [\n                    \"10.244.2.0/24\"\n                ]\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"cpu\": \"8\",\n                    \"ephemeral-storage\": \"253882800Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"53582972Ki\",\n                    \"pods\": \"110\"\n                },\n          
      \"allocatable\": {\n                    \"cpu\": \"8\",\n                    \"ephemeral-storage\": \"253882800Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"53582972Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2020-01-14T21:31:52Z\",\n                        \"lastTransitionTime\": \"2020-01-14T21:28:02Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2020-01-14T21:31:52Z\",\n                        \"lastTransitionTime\": \"2020-01-14T21:28:02Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2020-01-14T21:31:52Z\",\n                        \"lastTransitionTime\": \"2020-01-14T21:28:02Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2020-01-14T21:31:52Z\",\n                        \"lastTransitionTime\": \"2020-01-14T21:28:22Z\",\n                        \"reason\": 
\"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.17.0.4\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"kind-worker\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"66774eff97144b0cae3a506511e7aaee\",\n                    \"systemUUID\": \"4fef80c4-62fe-4caf-a191-2eb36337bcc7\",\n                    \"bootID\": \"69e06f15-02c9-4431-925a-ebcc758292f9\",\n                    \"kernelVersion\": \"4.15.0-1044-gke\",\n                    \"osImage\": \"Ubuntu 19.10\",\n                    \"containerRuntimeVersion\": \"containerd://1.3.2\",\n                    \"kubeletVersion\": \"v1.18.0-alpha.1.681+c12a96f7f64648\",\n                    \"kubeProxyVersion\": \"v1.18.0-alpha.1.681+c12a96f7f64648\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/etcd:3.4.3-0\"\n                        ],\n                        \"sizeBytes\": 289997247\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-apiserver:v1.18.0-alpha.1.681_c12a96f7f64648\"\n                        ],\n                        \"sizeBytes\": 196168859\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/kube-controller-manager:v1.18.0-alpha.1.681_c12a96f7f64648\"\n                        ],\n                        \"sizeBytes\": 181218468\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.1.681_c12a96f7f64648\"\n                        ],\n                        \"sizeBytes\": 123638266\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/kindest/kindnetd:0.5.4\"\n                        ],\n                        \"sizeBytes\": 113207016\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-scheduler:v1.18.0-alpha.1.681_c12a96f7f64648\"\n                        ],\n                        \"sizeBytes\": 102247578\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb\",\n                            \"gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0\"\n                        ],\n                        \"sizeBytes\": 85425365\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/fedora@sha256:d4f7df6b691d61af6cee7328f82f1d8afdef63bc38f58516858ae3045083924a\",\n                            \"docker.io/library/fedora:latest\"\n                        ],\n                        \"sizeBytes\": 66777964\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/debian-base:v2.0.0\"\n                        ],\n                        \"sizeBytes\": 53884301\n                    },\n                    {\n                        \"names\": [\n                            
\"k8s.gcr.io/coredns:1.6.5\"\n                        ],\n                        \"sizeBytes\": 41705951\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                            \"docker.io/library/httpd:2.4.38-alpine\"\n                        ],\n                        \"sizeBytes\": 40765017\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/rancher/local-path-provisioner:v0.0.11\"\n                        ],\n                        \"sizeBytes\": 36513375\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/csi-provisioner:v1.5.0\"\n                        ],\n                        \"sizeBytes\": 19239776\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/csi-snapshotter:v2.0.0\"\n                        ],\n                        \"sizeBytes\": 18493827\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/csi-attacher:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 18447803\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/csi-resizer:v0.4.0\"\n                        ],\n                        \"sizeBytes\": 18421801\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\",\n                            \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\n                        ],\n                     
   \"sizeBytes\": 17444032\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/hostpathplugin:v1.3.0-rc1\"\n                        ],\n                        \"sizeBytes\": 13344760\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19\",\n                            \"gcr.io/kubernetes-e2e-test-images/echoserver:2.2\"\n                        ],\n                        \"sizeBytes\": 10198788\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\"\n                        ],\n                        \"sizeBytes\": 7676183\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/mock-driver:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 7377931\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                            \"docker.io/library/nginx:1.14-alpine\"\n                        ],\n                        \"sizeBytes\": 6978806\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/livenessprobe:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 6690548\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd\",\n                            
\"gcr.io/kubernetes-e2e-test-images/dnsutils:1.1\"\n                        ],\n                        \"sizeBytes\": 4331310\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411\",\n                            \"gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0\"\n                        ],\n                        \"sizeBytes\": 3054649\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc\",\n                            \"gcr.io/kubernetes-e2e-test-images/nautilus:1.0\"\n                        ],\n                        \"sizeBytes\": 1804628\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e\",\n                            \"gcr.io/kubernetes-e2e-test-images/test-webserver:1.0\"\n                        ],\n                        \"sizeBytes\": 1791163\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause:3.1\"\n                        ],\n                        \"sizeBytes\": 746479\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796\",\n                            \"docker.io/library/busybox:1.29\"\n                        ],\n                        \"sizeBytes\": 732685\n                    },\n                    {\n                        \"names\": [\n                            
\"gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2\",\n                            \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\"\n                        ],\n                        \"sizeBytes\": 599341\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d\",\n                            \"gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0\"\n                        ],\n                        \"sizeBytes\": 539309\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kind-worker2\",\n                \"selfLink\": \"/api/v1/nodes/kind-worker2\",\n                \"uid\": \"f4169e00-dcf3-4dbe-a5a4-9a518e978730\",\n                \"resourceVersion\": \"12147\",\n                \"creationTimestamp\": \"2020-01-14T21:28:01Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"kind-worker2\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"topology.hostpath.csi/node\": \"kind-worker2\"\n                },\n                \"annotations\": {\n                    \"csi.volume.kubernetes.io/nodeid\": 
\"{\\\"csi-hostpath-ephemeral-3610\\\":\\\"kind-worker2\\\",\\\"csi-hostpath-provisioning-4063\\\":\\\"kind-worker2\\\",\\\"csi-hostpath-provisioning-548\\\":\\\"kind-worker2\\\",\\\"csi-hostpath-volume-expand-3601\\\":\\\"kind-worker2\\\",\\\"csi-hostpath-volume-expand-5610\\\":\\\"kind-worker2\\\",\\\"csi-hostpath-volumemode-7660\\\":\\\"kind-worker2\\\",\\\"csi-mock-csi-mock-volumes-5275\\\":\\\"csi-mock-csi-mock-volumes-5275\\\",\\\"csi-mock-csi-mock-volumes-7542\\\":\\\"csi-mock-csi-mock-volumes-7542\\\",\\\"csi-mock-csi-mock-volumes-7850\\\":\\\"csi-mock-csi-mock-volumes-7850\\\",\\\"csi-mock-csi-mock-volumes-8415\\\":\\\"csi-mock-csi-mock-volumes-8415\\\",\\\"csi-mock-csi-mock-volumes-8526\\\":\\\"csi-mock-csi-mock-volumes-8526\\\",\\\"csi-mock-csi-mock-volumes-8822\\\":\\\"csi-mock-csi-mock-volumes-8822\\\"}\",\n                    \"kubeadm.alpha.kubernetes.io/cri-socket\": \"/run/containerd/containerd.sock\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                },\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kubeadm\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:28:01Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:metadata\": {\n                                \"f:annotations\": {\n                                    \"f:kubeadm.alpha.kubernetes.io/cri-socket\": {}\n                                }\n                            }\n                        }\n                    },\n                    {\n                        \"manager\": \"kube-controller-manager\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        
\"time\": \"2020-01-14T21:32:18Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:metadata\": {\n                                \"f:annotations\": {\n                                    \"f:node.alpha.kubernetes.io/ttl\": {}\n                                }\n                            },\n                            \"f:spec\": {\n                                \"f:podCIDR\": {},\n                                \"f:podCIDRs\": {\n                                    \"v:\\\"10.244.1.0/24\\\"\": {},\n                                    \".\": {}\n                                }\n                            },\n                            \"f:status\": {\n                                \"f:volumesAttached\": {}\n                            }\n                        }\n                    },\n                    {\n                        \"manager\": \"kubelet\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:32:29Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:metadata\": {\n                                \"f:annotations\": {\n                                    \"f:csi.volume.kubernetes.io/nodeid\": {},\n                                    \"f:volumes.kubernetes.io/controller-managed-attach-detach\": {},\n                                    \".\": {}\n                                },\n                                \"f:labels\": {\n                                    \"f:beta.kubernetes.io/arch\": {},\n                                    \"f:beta.kubernetes.io/os\": {},\n                                    \"f:kubernetes.io/arch\": {},\n                                    \"f:kubernetes.io/hostname\": {},\n                                    \"f:kubernetes.io/os\": {},\n              
                      \"f:topology.hostpath.csi/node\": {},\n                                    \".\": {}\n                                }\n                            },\n                            \"f:status\": {\n                                \"f:addresses\": {\n                                    \"k:{\\\"type\\\":\\\"Hostname\\\"}\": {\n                                        \"f:address\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"InternalIP\\\"}\": {\n                                        \"f:address\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \".\": {}\n                                },\n                                \"f:allocatable\": {\n                                    \"f:cpu\": {},\n                                    \"f:ephemeral-storage\": {},\n                                    \"f:hugepages-1Gi\": {},\n                                    \"f:hugepages-2Mi\": {},\n                                    \"f:memory\": {},\n                                    \"f:pods\": {},\n                                    \".\": {}\n                                },\n                                \"f:capacity\": {\n                                    \"f:cpu\": {},\n                                    \"f:ephemeral-storage\": {},\n                                    \"f:hugepages-1Gi\": {},\n                                    \"f:hugepages-2Mi\": {},\n                                    \"f:memory\": {},\n                                    \"f:pods\": {},\n                                    \".\": {}\n                                },\n                                \"f:conditions\": {\n                                    
\"k:{\\\"type\\\":\\\"DiskPressure\\\"}\": {\n                                        \"f:lastHeartbeatTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:message\": {},\n                                        \"f:reason\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"MemoryPressure\\\"}\": {\n                                        \"f:lastHeartbeatTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:message\": {},\n                                        \"f:reason\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"PIDPressure\\\"}\": {\n                                        \"f:lastHeartbeatTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:message\": {},\n                                        \"f:reason\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                        \"f:lastHeartbeatTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:message\": {},\n                                        \"f:reason\": {},\n                                        
\"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \".\": {}\n                                },\n                                \"f:daemonEndpoints\": {\n                                    \"f:kubeletEndpoint\": {\n                                        \"f:Port\": {}\n                                    }\n                                },\n                                \"f:images\": {},\n                                \"f:nodeInfo\": {\n                                    \"f:architecture\": {},\n                                    \"f:bootID\": {},\n                                    \"f:containerRuntimeVersion\": {},\n                                    \"f:kernelVersion\": {},\n                                    \"f:kubeProxyVersion\": {},\n                                    \"f:kubeletVersion\": {},\n                                    \"f:machineID\": {},\n                                    \"f:operatingSystem\": {},\n                                    \"f:osImage\": {},\n                                    \"f:systemUUID\": {}\n                                },\n                                \"f:volumesInUse\": {}\n                            }\n                        }\n                    }\n                ]\n            },\n            \"spec\": {\n                \"podCIDR\": \"10.244.1.0/24\",\n                \"podCIDRs\": [\n                    \"10.244.1.0/24\"\n                ]\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"cpu\": \"8\",\n                    \"ephemeral-storage\": \"253882800Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"53582972Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": 
{\n                    \"cpu\": \"8\",\n                    \"ephemeral-storage\": \"253882800Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"53582972Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2020-01-14T21:32:22Z\",\n                        \"lastTransitionTime\": \"2020-01-14T21:28:01Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2020-01-14T21:32:22Z\",\n                        \"lastTransitionTime\": \"2020-01-14T21:28:01Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2020-01-14T21:32:22Z\",\n                        \"lastTransitionTime\": \"2020-01-14T21:28:01Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2020-01-14T21:32:22Z\",\n                        \"lastTransitionTime\": \"2020-01-14T21:28:21Z\",\n                        \"reason\": \"KubeletReady\",\n                
        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.17.0.3\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"kind-worker2\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"7907d9decb194e7b917618b5983c100d\",\n                    \"systemUUID\": \"d69c371e-f7ce-4f4b-bddf-2e89c0d4fc41\",\n                    \"bootID\": \"69e06f15-02c9-4431-925a-ebcc758292f9\",\n                    \"kernelVersion\": \"4.15.0-1044-gke\",\n                    \"osImage\": \"Ubuntu 19.10\",\n                    \"containerRuntimeVersion\": \"containerd://1.3.2\",\n                    \"kubeletVersion\": \"v1.18.0-alpha.1.681+c12a96f7f64648\",\n                    \"kubeProxyVersion\": \"v1.18.0-alpha.1.681+c12a96f7f64648\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646\",\n                            \"k8s.gcr.io/etcd:3.4.3-0\",\n                            \"k8s.gcr.io/etcd:3.4.3\"\n                        ],\n                        \"sizeBytes\": 289997247\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-apiserver:v1.18.0-alpha.1.681_c12a96f7f64648\"\n                        ],\n                        
\"sizeBytes\": 196168859\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-controller-manager:v1.18.0-alpha.1.681_c12a96f7f64648\"\n                        ],\n                        \"sizeBytes\": 181218468\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.1.681_c12a96f7f64648\"\n                        ],\n                        \"sizeBytes\": 123638266\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/kindest/kindnetd:0.5.4\"\n                        ],\n                        \"sizeBytes\": 113207016\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-scheduler:v1.18.0-alpha.1.681_c12a96f7f64648\"\n                        ],\n                        \"sizeBytes\": 102247578\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb\",\n                            \"gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0\"\n                        ],\n                        \"sizeBytes\": 85425365\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/fedora@sha256:d4f7df6b691d61af6cee7328f82f1d8afdef63bc38f58516858ae3045083924a\",\n                            \"docker.io/library/fedora:latest\"\n                        ],\n                        \"sizeBytes\": 66777964\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/debian-base:v2.0.0\"\n                        ],\n                        
\"sizeBytes\": 53884301\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/coredns:1.6.5\"\n                        ],\n                        \"sizeBytes\": 41705951\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                            \"docker.io/library/httpd:2.4.38-alpine\"\n                        ],\n                        \"sizeBytes\": 40765017\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/rancher/local-path-provisioner:v0.0.11\"\n                        ],\n                        \"sizeBytes\": 36513375\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55\",\n                            \"gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17\"\n                        ],\n                        \"sizeBytes\": 25311280\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/csi-provisioner:v1.5.0\"\n                        ],\n                        \"sizeBytes\": 19239776\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/csi-snapshotter:v2.0.0\"\n                        ],\n                        \"sizeBytes\": 18493827\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/csi-attacher:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 18447803\n                    },\n                    
{\n                        \"names\": [\n                            \"quay.io/k8scsi/csi-resizer:v0.4.0\"\n                        ],\n                        \"sizeBytes\": 18421801\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\",\n                            \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\"\n                        ],\n                        \"sizeBytes\": 17444032\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/hostpathplugin:v1.3.0-rc1\"\n                        ],\n                        \"sizeBytes\": 13344760\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19\",\n                            \"gcr.io/kubernetes-e2e-test-images/echoserver:2.2\"\n                        ],\n                        \"sizeBytes\": 10198788\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/csi-node-driver-registrar:v1.2.0\"\n                        ],\n                        \"sizeBytes\": 7676183\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/mock-driver:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 7377931\n                    },\n                    {\n                        \"names\": [\n                            \"quay.io/k8scsi/livenessprobe:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 6690548\n                    },\n                    {\n                        \"names\": [\n          
                  \"gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd\",\n                            \"gcr.io/kubernetes-e2e-test-images/dnsutils:1.1\"\n                        ],\n                        \"sizeBytes\": 4331310\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc\",\n                            \"gcr.io/kubernetes-e2e-test-images/nautilus:1.0\"\n                        ],\n                        \"sizeBytes\": 1804628\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e\",\n                            \"gcr.io/kubernetes-e2e-test-images/test-webserver:1.0\"\n                        ],\n                        \"sizeBytes\": 1791163\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause:3.1\"\n                        ],\n                        \"sizeBytes\": 746479\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796\",\n                            \"docker.io/library/busybox:1.29\"\n                        ],\n                        \"sizeBytes\": 732685\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2\",\n                            \"gcr.io/kubernetes-e2e-test-images/mounttest:1.0\"\n                        
],\n                        \"sizeBytes\": 599341\n                    }\n                ],\n                \"volumesInUse\": [\n                    \"kubernetes.io/csi/csi-hostpath-volume-expand-5610^56900425-3715-11ea-a84c-42ff42a7f54c\",\n                    \"kubernetes.io/csi/csi-mock-csi-mock-volumes-5275^4\",\n                    \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8822^4\"\n                ],\n                \"volumesAttached\": [\n                    {\n                        \"name\": \"kubernetes.io/csi/csi-mock-csi-mock-volumes-5275^4\",\n                        \"devicePath\": \"\"\n                    },\n                    {\n                        \"name\": \"kubernetes.io/csi/csi-mock-csi-mock-volumes-8822^4\",\n                        \"devicePath\": \"\"\n                    },\n                    {\n                        \"name\": \"kubernetes.io/csi/csi-hostpath-volume-expand-5610^56900425-3715-11ea-a84c-42ff42a7f54c\",\n                        \"devicePath\": \"\"\n                    }\n                ]\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"EventList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"selfLink\": \"/api/v1/namespaces/kube-system/events\",\n        \"resourceVersion\": \"12192\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-45qgf.15e9de067ce6a094\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-45qgf.15e9de067ce6a094\",\n                \"uid\": \"e65737e2-2f5a-4ead-b73f-36eee82c8cd6\",\n                \"resourceVersion\": \"466\",\n                \"creationTimestamp\": \"2020-01-14T21:27:43Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-45qgf\",\n                \"uid\": 
\"0f458487-a615-4729-92be-1253a0e4a65b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"384\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:27:43Z\",\n            \"lastTimestamp\": \"2020-01-14T21:27:58Z\",\n            \"count\": 4,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-45qgf.15e9de0a5f9e15af\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-45qgf.15e9de0a5f9e15af\",\n                \"uid\": \"5e95ce99-fcc2-4532-8c04-39c0083f93b4\",\n                \"resourceVersion\": \"477\",\n                \"creationTimestamp\": \"2020-01-14T21:28:00Z\",\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kube-scheduler\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:28:00Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:count\": {},\n                            \"f:firstTimestamp\": {},\n                            \"f:involvedObject\": {\n                                \"f:apiVersion\": {},\n                                \"f:kind\": {},\n                                \"f:name\": {},\n                                \"f:namespace\": {},\n                                \"f:resourceVersion\": {},\n                                
\"f:uid\": {}\n                            },\n                            \"f:lastTimestamp\": {},\n                            \"f:message\": {},\n                            \"f:reason\": {},\n                            \"f:source\": {\n                                \"f:component\": {}\n                            },\n                            \"f:type\": {}\n                        }\n                    }\n                ]\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-45qgf\",\n                \"uid\": \"0f458487-a615-4729-92be-1253a0e4a65b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"388\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-6955765f44-45qgf to kind-control-plane\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:28:00Z\",\n            \"lastTimestamp\": \"2020-01-14T21:28:00Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-45qgf.15e9de0a99a1639b\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-45qgf.15e9de0a99a1639b\",\n                \"uid\": \"34583a4e-b407-4589-95fd-f4bf93b1a572\",\n                \"resourceVersion\": \"483\",\n                \"creationTimestamp\": \"2020-01-14T21:28:01Z\",\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kubelet\",\n                        \"operation\": \"Update\",\n               
         \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:28:01Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:count\": {},\n                            \"f:firstTimestamp\": {},\n                            \"f:involvedObject\": {\n                                \"f:apiVersion\": {},\n                                \"f:fieldPath\": {},\n                                \"f:kind\": {},\n                                \"f:name\": {},\n                                \"f:namespace\": {},\n                                \"f:resourceVersion\": {},\n                                \"f:uid\": {}\n                            },\n                            \"f:lastTimestamp\": {},\n                            \"f:message\": {},\n                            \"f:reason\": {},\n                            \"f:source\": {\n                                \"f:component\": {},\n                                \"f:host\": {}\n                            },\n                            \"f:type\": {}\n                        }\n                    }\n                ]\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-45qgf\",\n                \"uid\": \"0f458487-a615-4729-92be-1253a0e4a65b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"474\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/coredns:1.6.5\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:28:01Z\",\n            \"lastTimestamp\": 
\"2020-01-14T21:28:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-45qgf.15e9de0acedeea32\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-45qgf.15e9de0acedeea32\",\n                \"uid\": \"5113c4c4-08c0-46eb-b35c-686e538fca03\",\n                \"resourceVersion\": \"526\",\n                \"creationTimestamp\": \"2020-01-14T21:28:02Z\",\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kubelet\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:28:02Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:count\": {},\n                            \"f:firstTimestamp\": {},\n                            \"f:involvedObject\": {\n                                \"f:apiVersion\": {},\n                                \"f:fieldPath\": {},\n                                \"f:kind\": {},\n                                \"f:name\": {},\n                                \"f:namespace\": {},\n                                \"f:resourceVersion\": {},\n                                \"f:uid\": {}\n                            },\n                            \"f:lastTimestamp\": {},\n                            \"f:message\": {},\n                            \"f:reason\": {},\n                            \"f:source\": {\n                                \"f:component\": {},\n                                \"f:host\": {}\n                            },\n                            \"f:type\": {}\n                        
}\n                    }\n                ]\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-45qgf\",\n                \"uid\": \"0f458487-a615-4729-92be-1253a0e4a65b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"474\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:28:02Z\",\n            \"lastTimestamp\": \"2020-01-14T21:28:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-45qgf.15e9de0ae234d30d\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-45qgf.15e9de0ae234d30d\",\n                \"uid\": \"c2a69b13-c777-4e14-8614-1f96af72d45a\",\n                \"resourceVersion\": \"535\",\n                \"creationTimestamp\": \"2020-01-14T21:28:02Z\",\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kubelet\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:28:02Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:count\": {},\n                            \"f:firstTimestamp\": {},\n                            \"f:involvedObject\": {\n   
                             \"f:apiVersion\": {},\n                                \"f:fieldPath\": {},\n                                \"f:kind\": {},\n                                \"f:name\": {},\n                                \"f:namespace\": {},\n                                \"f:resourceVersion\": {},\n                                \"f:uid\": {}\n                            },\n                            \"f:lastTimestamp\": {},\n                            \"f:message\": {},\n                            \"f:reason\": {},\n                            \"f:source\": {\n                                \"f:component\": {},\n                                \"f:host\": {}\n                            },\n                            \"f:type\": {}\n                        }\n                    }\n                ]\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-45qgf\",\n                \"uid\": \"0f458487-a615-4729-92be-1253a0e4a65b\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"474\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:28:02Z\",\n            \"lastTimestamp\": \"2020-01-14T21:28:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-blnrh.15e9de067e289652\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": 
\"/api/v1/namespaces/kube-system/events/coredns-6955765f44-blnrh.15e9de067e289652\",\n                \"uid\": \"ada4acbc-40bc-4ac8-88c3-6565b4959705\",\n                \"resourceVersion\": \"464\",\n                \"creationTimestamp\": \"2020-01-14T21:27:43Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-blnrh\",\n                \"uid\": \"12c36094-4dbc-4d46-9e3f-0978bc0f3639\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"386\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:27:43Z\",\n            \"lastTimestamp\": \"2020-01-14T21:27:58Z\",\n            \"count\": 3,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-blnrh.15e9de0a5f7bc177\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-blnrh.15e9de0a5f7bc177\",\n                \"uid\": \"0322d207-d587-486e-a4b4-8e4a99d97c94\",\n                \"resourceVersion\": \"475\",\n                \"creationTimestamp\": \"2020-01-14T21:28:00Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-blnrh\",\n                \"uid\": \"12c36094-4dbc-4d46-9e3f-0978bc0f3639\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"401\"\n  
          },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-6955765f44-blnrh to kind-control-plane\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:28:00Z\",\n            \"lastTimestamp\": \"2020-01-14T21:28:00Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-blnrh.15e9de0a95bcde89\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-blnrh.15e9de0a95bcde89\",\n                \"uid\": \"259417d6-9fb4-4218-832c-d39be85c1117\",\n                \"resourceVersion\": \"482\",\n                \"creationTimestamp\": \"2020-01-14T21:28:01Z\",\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kubelet\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:28:01Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:count\": {},\n                            \"f:firstTimestamp\": {},\n                            \"f:involvedObject\": {\n                                \"f:apiVersion\": {},\n                                \"f:fieldPath\": {},\n                                \"f:kind\": {},\n                                \"f:name\": {},\n                                \"f:namespace\": {},\n                                \"f:resourceVersion\": {},\n                                \"f:uid\": {}\n                            },\n                            
\"f:lastTimestamp\": {},\n                            \"f:message\": {},\n                            \"f:reason\": {},\n                            \"f:source\": {\n                                \"f:component\": {},\n                                \"f:host\": {}\n                            },\n                            \"f:type\": {}\n                        }\n                    }\n                ]\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-blnrh\",\n                \"uid\": \"12c36094-4dbc-4d46-9e3f-0978bc0f3639\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"473\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/coredns:1.6.5\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:28:01Z\",\n            \"lastTimestamp\": \"2020-01-14T21:28:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-blnrh.15e9de0acd299334\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-blnrh.15e9de0acd299334\",\n                \"uid\": \"e1f19473-b9ba-4f88-bbae-fef6b3c17235\",\n                \"resourceVersion\": \"525\",\n                \"creationTimestamp\": \"2020-01-14T21:28:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                
\"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-blnrh\",\n                \"uid\": \"12c36094-4dbc-4d46-9e3f-0978bc0f3639\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"473\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:28:02Z\",\n            \"lastTimestamp\": \"2020-01-14T21:28:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-blnrh.15e9de0ae60c6135\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44-blnrh.15e9de0ae60c6135\",\n                \"uid\": \"0f44b568-895a-43c7-a5ed-48c38152e6fd\",\n                \"resourceVersion\": \"536\",\n                \"creationTimestamp\": \"2020-01-14T21:28:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44-blnrh\",\n                \"uid\": \"12c36094-4dbc-4d46-9e3f-0978bc0f3639\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"473\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            
\"firstTimestamp\": \"2020-01-14T21:28:02Z\",\n            \"lastTimestamp\": \"2020-01-14T21:28:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44.15e9de067cd3e011\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44.15e9de067cd3e011\",\n                \"uid\": \"be0a7c33-5198-4dbd-91f2-cbb9d58a517e\",\n                \"resourceVersion\": \"392\",\n                \"creationTimestamp\": \"2020-01-14T21:27:43Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44\",\n                \"uid\": \"2f78164f-7a71-4604-9ac5-462c80cae439\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"377\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-6955765f44-45qgf\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:27:43Z\",\n            \"lastTimestamp\": \"2020-01-14T21:27:43Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44.15e9de067d652c36\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns-6955765f44.15e9de067d652c36\",\n                \"uid\": \"0942d30c-bdd3-40f7-9995-9e739b1597b8\",\n                
\"resourceVersion\": \"398\",\n                \"creationTimestamp\": \"2020-01-14T21:27:43Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-6955765f44\",\n                \"uid\": \"2f78164f-7a71-4604-9ac5-462c80cae439\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"377\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-6955765f44-blnrh\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:27:43Z\",\n            \"lastTimestamp\": \"2020-01-14T21:27:43Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns.15e9de067bff4ba9\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/coredns.15e9de067bff4ba9\",\n                \"uid\": \"a17f608e-7ccd-4871-a960-7b9724520402\",\n                \"resourceVersion\": \"383\",\n                \"creationTimestamp\": \"2020-01-14T21:27:43Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns\",\n                \"uid\": \"692ea41e-e15a-43e1-bb0e-4911945a514a\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"185\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-6955765f44 to 2\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            
},\n            \"firstTimestamp\": \"2020-01-14T21:27:43Z\",\n            \"lastTimestamp\": \"2020-01-14T21:27:43Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-2hf8t.15e9de067b87a44f\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-2hf8t.15e9de067b87a44f\",\n                \"uid\": \"6ba22a89-123c-4d87-be1c-8f8b4da002bb\",\n                \"resourceVersion\": \"376\",\n                \"creationTimestamp\": \"2020-01-14T21:27:43Z\",\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kube-scheduler\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:27:43Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:count\": {},\n                            \"f:firstTimestamp\": {},\n                            \"f:involvedObject\": {\n                                \"f:apiVersion\": {},\n                                \"f:kind\": {},\n                                \"f:name\": {},\n                                \"f:namespace\": {},\n                                \"f:resourceVersion\": {},\n                                \"f:uid\": {}\n                            },\n                            \"f:lastTimestamp\": {},\n                            \"f:message\": {},\n                            \"f:reason\": {},\n                            \"f:source\": {\n                                \"f:component\": {}\n                            },\n                            \"f:type\": {}\n                        }\n                    
}\n                ]\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-2hf8t\",\n                \"uid\": \"837d47ca-53f0-4316-9c5b-ecc566cad451\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"367\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kindnet-2hf8t to kind-control-plane\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:27:43Z\",\n            \"lastTimestamp\": \"2020-01-14T21:27:43Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-2hf8t.15e9de06f946e24e\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-2hf8t.15e9de06f946e24e\",\n                \"uid\": \"0aad31aa-f5a3-4b92-85ac-68716101c7a6\",\n                \"resourceVersion\": \"421\",\n                \"creationTimestamp\": \"2020-01-14T21:27:45Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-2hf8t\",\n                \"uid\": \"837d47ca-53f0-4316-9c5b-ecc566cad451\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"374\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"kindest/kindnetd:0.5.4\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n        
        \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:27:45Z\",\n            \"lastTimestamp\": \"2020-01-14T21:27:45Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-2hf8t.15e9de07140518bb\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-2hf8t.15e9de07140518bb\",\n                \"uid\": \"2e67c3eb-f61a-47d6-b124-b15068192405\",\n                \"resourceVersion\": \"424\",\n                \"creationTimestamp\": \"2020-01-14T21:27:46Z\",\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kubelet\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:27:46Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:count\": {},\n                            \"f:firstTimestamp\": {},\n                            \"f:involvedObject\": {\n                                \"f:apiVersion\": {},\n                                \"f:fieldPath\": {},\n                                \"f:kind\": {},\n                                \"f:name\": {},\n                                \"f:namespace\": {},\n                                \"f:resourceVersion\": {},\n                                \"f:uid\": {}\n                            },\n                            \"f:lastTimestamp\": {},\n                            \"f:message\": {},\n                            \"f:reason\": {},\n                            \"f:source\": {\n                                \"f:component\": {},\n                       
         \"f:host\": {}\n                            },\n                            \"f:type\": {}\n                        }\n                    }\n                ]\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-2hf8t\",\n                \"uid\": \"837d47ca-53f0-4316-9c5b-ecc566cad451\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"374\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kindnet-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:27:46Z\",\n            \"lastTimestamp\": \"2020-01-14T21:27:46Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-2hf8t.15e9de073840adad\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-2hf8t.15e9de073840adad\",\n                \"uid\": \"0f472cdd-75b4-442b-ad4d-d517a5fd30b4\",\n                \"resourceVersion\": \"429\",\n                \"creationTimestamp\": \"2020-01-14T21:27:46Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-2hf8t\",\n                \"uid\": \"837d47ca-53f0-4316-9c5b-ecc566cad451\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"374\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n         
   \"reason\": \"Started\",\n            \"message\": \"Started container kindnet-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:27:46Z\",\n            \"lastTimestamp\": \"2020-01-14T21:27:46Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-6rhkp.15e9de0ac505dd2d\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-6rhkp.15e9de0ac505dd2d\",\n                \"uid\": \"6652a030-8a3f-4981-9dd1-6e0221663c7b\",\n                \"resourceVersion\": \"518\",\n                \"creationTimestamp\": \"2020-01-14T21:28:02Z\",\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kube-scheduler\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:28:02Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:count\": {},\n                            \"f:firstTimestamp\": {},\n                            \"f:involvedObject\": {\n                                \"f:apiVersion\": {},\n                                \"f:kind\": {},\n                                \"f:name\": {},\n                                \"f:namespace\": {},\n                                \"f:resourceVersion\": {},\n                                \"f:uid\": {}\n                            },\n                            \"f:lastTimestamp\": {},\n                            \"f:message\": {},\n                            \"f:reason\": 
{},\n                            \"f:source\": {\n                                \"f:component\": {}\n                            },\n                            \"f:type\": {}\n                        }\n                    }\n                ]\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-6rhkp\",\n                \"uid\": \"686881ea-3f52-4876-b501-f2be4e700242\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"510\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kindnet-6rhkp to kind-worker\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:28:02Z\",\n            \"lastTimestamp\": \"2020-01-14T21:28:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-6rhkp.15e9de0b0319c0a7\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-6rhkp.15e9de0b0319c0a7\",\n                \"uid\": \"15ea78d2-dc94-4a67-8029-0229f854cfab\",\n                \"resourceVersion\": \"539\",\n                \"creationTimestamp\": \"2020-01-14T21:28:03Z\",\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kubelet\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:28:03Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:count\": {},\n     
                       \"f:firstTimestamp\": {},\n                            \"f:involvedObject\": {\n                                \"f:apiVersion\": {},\n                                \"f:fieldPath\": {},\n                                \"f:kind\": {},\n                                \"f:name\": {},\n                                \"f:namespace\": {},\n                                \"f:resourceVersion\": {},\n                                \"f:uid\": {}\n                            },\n                            \"f:lastTimestamp\": {},\n                            \"f:message\": {},\n                            \"f:reason\": {},\n                            \"f:source\": {\n                                \"f:component\": {},\n                                \"f:host\": {}\n                            },\n                            \"f:type\": {}\n                        }\n                    }\n                ]\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-6rhkp\",\n                \"uid\": \"686881ea-3f52-4876-b501-f2be4e700242\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"514\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"kindest/kindnetd:0.5.4\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:28:03Z\",\n            \"lastTimestamp\": \"2020-01-14T21:28:03Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n               
 \"name\": \"kindnet-6rhkp.15e9de0b92e0db02\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-6rhkp.15e9de0b92e0db02\",\n                \"uid\": \"5f0661d6-f401-4968-8dbc-8b0aa566595e\",\n                \"resourceVersion\": \"551\",\n                \"creationTimestamp\": \"2020-01-14T21:28:05Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-6rhkp\",\n                \"uid\": \"686881ea-3f52-4876-b501-f2be4e700242\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"514\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kindnet-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:28:05Z\",\n            \"lastTimestamp\": \"2020-01-14T21:28:05Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-6rhkp.15e9de0bb97b6f21\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-6rhkp.15e9de0bb97b6f21\",\n                \"uid\": \"21233cdc-b323-4911-84b7-0574ca8dc6b4\",\n                \"resourceVersion\": \"563\",\n                \"creationTimestamp\": \"2020-01-14T21:28:06Z\",\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kubelet\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n         
               \"time\": \"2020-01-14T21:28:06Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:count\": {},\n                            \"f:firstTimestamp\": {},\n                            \"f:involvedObject\": {\n                                \"f:apiVersion\": {},\n                                \"f:fieldPath\": {},\n                                \"f:kind\": {},\n                                \"f:name\": {},\n                                \"f:namespace\": {},\n                                \"f:resourceVersion\": {},\n                                \"f:uid\": {}\n                            },\n                            \"f:lastTimestamp\": {},\n                            \"f:message\": {},\n                            \"f:reason\": {},\n                            \"f:source\": {\n                                \"f:component\": {},\n                                \"f:host\": {}\n                            },\n                            \"f:type\": {}\n                        }\n                    }\n                ]\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-6rhkp\",\n                \"uid\": \"686881ea-3f52-4876-b501-f2be4e700242\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"514\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kindnet-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:28:06Z\",\n            \"lastTimestamp\": \"2020-01-14T21:28:06Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": 
null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-jxzbl.15e9de0aa78b1b52\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-jxzbl.15e9de0aa78b1b52\",\n                \"uid\": \"ac6d31e6-9577-49f1-a2be-bcb801633c0a\",\n                \"resourceVersion\": \"499\",\n                \"creationTimestamp\": \"2020-01-14T21:28:01Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-jxzbl\",\n                \"uid\": \"ab6159d6-619a-41db-a816-0600a87923a8\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"488\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kindnet-jxzbl to kind-worker2\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:28:01Z\",\n            \"lastTimestamp\": \"2020-01-14T21:28:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-jxzbl.15e9de0af0bada96\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-jxzbl.15e9de0af0bada96\",\n                \"uid\": \"a0247f75-3644-4e44-a5b5-ff900aa0f472\",\n                \"resourceVersion\": \"537\",\n                \"creationTimestamp\": \"2020-01-14T21:28:02Z\",\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kubelet\",\n                
        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:28:02Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:count\": {},\n                            \"f:firstTimestamp\": {},\n                            \"f:involvedObject\": {\n                                \"f:apiVersion\": {},\n                                \"f:fieldPath\": {},\n                                \"f:kind\": {},\n                                \"f:name\": {},\n                                \"f:namespace\": {},\n                                \"f:resourceVersion\": {},\n                                \"f:uid\": {}\n                            },\n                            \"f:lastTimestamp\": {},\n                            \"f:message\": {},\n                            \"f:reason\": {},\n                            \"f:source\": {\n                                \"f:component\": {},\n                                \"f:host\": {}\n                            },\n                            \"f:type\": {}\n                        }\n                    }\n                ]\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-jxzbl\",\n                \"uid\": \"ab6159d6-619a-41db-a816-0600a87923a8\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"494\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"kindest/kindnetd:0.5.4\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:28:02Z\",\n           
 \"lastTimestamp\": \"2020-01-14T21:28:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-jxzbl.15e9de0b930b378b\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-jxzbl.15e9de0b930b378b\",\n                \"uid\": \"e79b06d7-ce4c-4298-8346-1a04e7861eef\",\n                \"resourceVersion\": \"552\",\n                \"creationTimestamp\": \"2020-01-14T21:28:05Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-jxzbl\",\n                \"uid\": \"ab6159d6-619a-41db-a816-0600a87923a8\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"494\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kindnet-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:28:05Z\",\n            \"lastTimestamp\": \"2020-01-14T21:28:05Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-jxzbl.15e9de0bbb9529be\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet-jxzbl.15e9de0bbb9529be\",\n                \"uid\": \"74fe24d5-45a6-4d82-b3ed-14e38905928b\",\n                \"resourceVersion\": 
\"564\",\n                \"creationTimestamp\": \"2020-01-14T21:28:06Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-jxzbl\",\n                \"uid\": \"ab6159d6-619a-41db-a816-0600a87923a8\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"494\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kindnet-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:28:06Z\",\n            \"lastTimestamp\": \"2020-01-14T21:28:06Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet.15e9de067a882dc6\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet.15e9de067a882dc6\",\n                \"uid\": \"26c7877e-4307-4232-8ca2-94f965fd7d8c\",\n                \"resourceVersion\": \"369\",\n                \"creationTimestamp\": \"2020-01-14T21:27:43Z\",\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kube-controller-manager\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:27:43Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:count\": {},\n                            \"f:firstTimestamp\": {},\n                            \"f:involvedObject\": 
{\n                                \"f:apiVersion\": {},\n                                \"f:kind\": {},\n                                \"f:name\": {},\n                                \"f:namespace\": {},\n                                \"f:resourceVersion\": {},\n                                \"f:uid\": {}\n                            },\n                            \"f:lastTimestamp\": {},\n                            \"f:message\": {},\n                            \"f:reason\": {},\n                            \"f:source\": {\n                                \"f:component\": {}\n                            },\n                            \"f:type\": {}\n                        }\n                    }\n                ]\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet\",\n                \"uid\": \"e718f5b5-6f0d-49a2-a4e2-6a6f0b1641a9\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"251\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kindnet-2hf8t\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:27:43Z\",\n            \"lastTimestamp\": \"2020-01-14T21:27:43Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet.15e9de0aa6e638f6\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet.15e9de0aa6e638f6\",\n                \"uid\": \"fe04803e-d51c-4ca5-9f27-3361e48345d6\",\n                \"resourceVersion\": \"496\",\n                
\"creationTimestamp\": \"2020-01-14T21:28:01Z\",\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kube-controller-manager\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:28:01Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:count\": {},\n                            \"f:firstTimestamp\": {},\n                            \"f:involvedObject\": {\n                                \"f:apiVersion\": {},\n                                \"f:kind\": {},\n                                \"f:name\": {},\n                                \"f:namespace\": {},\n                                \"f:resourceVersion\": {},\n                                \"f:uid\": {}\n                            },\n                            \"f:lastTimestamp\": {},\n                            \"f:message\": {},\n                            \"f:reason\": {},\n                            \"f:source\": {\n                                \"f:component\": {}\n                            },\n                            \"f:type\": {}\n                        }\n                    }\n                ]\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet\",\n                \"uid\": \"e718f5b5-6f0d-49a2-a4e2-6a6f0b1641a9\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"434\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kindnet-jxzbl\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:28:01Z\",\n            \"lastTimestamp\": 
\"2020-01-14T21:28:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet.15e9de0ac38474ee\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kindnet.15e9de0ac38474ee\",\n                \"uid\": \"6d466a3f-548d-4408-895c-cc81f68e25d3\",\n                \"resourceVersion\": \"515\",\n                \"creationTimestamp\": \"2020-01-14T21:28:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet\",\n                \"uid\": \"e718f5b5-6f0d-49a2-a4e2-6a6f0b1641a9\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"493\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kindnet-6rhkp\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:28:02Z\",\n            \"lastTimestamp\": \"2020-01-14T21:28:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager.15e9de02c155e4d7\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-controller-manager.15e9de02c155e4d7\",\n                \"uid\": \"c606e074-844b-4963-b7ea-19e2eb7ee4fd\",\n                \"resourceVersion\": \"165\",\n                \"creationTimestamp\": \"2020-01-14T21:27:27Z\",\n                
\"managedFields\": [\n                    {\n                        \"manager\": \"kube-controller-manager\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:27:27Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:count\": {},\n                            \"f:firstTimestamp\": {},\n                            \"f:involvedObject\": {\n                                \"f:apiVersion\": {},\n                                \"f:kind\": {},\n                                \"f:name\": {},\n                                \"f:namespace\": {},\n                                \"f:resourceVersion\": {},\n                                \"f:uid\": {}\n                            },\n                            \"f:lastTimestamp\": {},\n                            \"f:message\": {},\n                            \"f:reason\": {},\n                            \"f:source\": {\n                                \"f:component\": {}\n                            },\n                            \"f:type\": {}\n                        }\n                    }\n                ]\n            },\n            \"involvedObject\": {\n                \"kind\": \"Endpoints\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager\",\n                \"uid\": \"d5d1735a-0041-43dc-9bd1-fd94797ebe4c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"163\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"kind-control-plane_271067d6-33cf-4930-9ba7-05996c920976 became leader\",\n            \"source\": {\n                \"component\": \"kube-controller-manager\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:27:27Z\",\n            \"lastTimestamp\": \"2020-01-14T21:27:27Z\",\n   
         \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager.15e9de02c1561735\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-controller-manager.15e9de02c1561735\",\n                \"uid\": \"2f90b71b-49ac-4a2e-be9b-caeeece413d6\",\n                \"resourceVersion\": \"166\",\n                \"creationTimestamp\": \"2020-01-14T21:27:27Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager\",\n                \"uid\": \"52c25d25-039a-4e58-854f-bea1dc7371f3\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"164\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"kind-control-plane_271067d6-33cf-4930-9ba7-05996c920976 became leader\",\n            \"source\": {\n                \"component\": \"kube-controller-manager\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:27:27Z\",\n            \"lastTimestamp\": \"2020-01-14T21:27:27Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-4md69.15e9de0ac53a79bc\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-4md69.15e9de0ac53a79bc\",\n                \"uid\": \"57eee6ee-4fb2-42ca-9017-090c4b6432e1\",\n                \"resourceVersion\": \"521\",\n                
\"creationTimestamp\": \"2020-01-14T21:28:02Z\",\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kube-scheduler\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:28:02Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:count\": {},\n                            \"f:firstTimestamp\": {},\n                            \"f:involvedObject\": {\n                                \"f:apiVersion\": {},\n                                \"f:kind\": {},\n                                \"f:name\": {},\n                                \"f:namespace\": {},\n                                \"f:resourceVersion\": {},\n                                \"f:uid\": {}\n                            },\n                            \"f:lastTimestamp\": {},\n                            \"f:message\": {},\n                            \"f:reason\": {},\n                            \"f:source\": {\n                                \"f:component\": {}\n                            },\n                            \"f:type\": {}\n                        }\n                    }\n                ]\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-4md69\",\n                \"uid\": \"37fe8706-2809-4c8e-a140-851c9064adb1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"511\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-proxy-4md69 to kind-worker\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:28:02Z\",\n            \"lastTimestamp\": 
\"2020-01-14T21:28:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-4md69.15e9de0af34380b9\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-4md69.15e9de0af34380b9\",\n                \"uid\": \"048b9b27-b2da-4185-8d0a-c88a9cb5a55d\",\n                \"resourceVersion\": \"538\",\n                \"creationTimestamp\": \"2020-01-14T21:28:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-4md69\",\n                \"uid\": \"37fe8706-2809-4c8e-a140-851c9064adb1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"517\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy:v1.18.0-alpha.1.681_c12a96f7f64648\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:28:02Z\",\n            \"lastTimestamp\": \"2020-01-14T21:28:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-4md69.15e9de0b931ea8bd\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-4md69.15e9de0b931ea8bd\",\n                \"uid\": 
\"f16920dd-e239-4c51-bf0b-a2705e38ca9d\",\n                \"resourceVersion\": \"553\",\n                \"creationTimestamp\": \"2020-01-14T21:28:05Z\",\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kubelet\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:28:05Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:count\": {},\n                            \"f:firstTimestamp\": {},\n                            \"f:involvedObject\": {\n                                \"f:apiVersion\": {},\n                                \"f:fieldPath\": {},\n                                \"f:kind\": {},\n                                \"f:name\": {},\n                                \"f:namespace\": {},\n                                \"f:resourceVersion\": {},\n                                \"f:uid\": {}\n                            },\n                            \"f:lastTimestamp\": {},\n                            \"f:message\": {},\n                            \"f:reason\": {},\n                            \"f:source\": {\n                                \"f:component\": {},\n                                \"f:host\": {}\n                            },\n                            \"f:type\": {}\n                        }\n                    }\n                ]\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-4md69\",\n                \"uid\": \"37fe8706-2809-4c8e-a140-851c9064adb1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"517\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            
\"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:28:05Z\",\n            \"lastTimestamp\": \"2020-01-14T21:28:05Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-4md69.15e9de0ba18926ed\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-4md69.15e9de0ba18926ed\",\n                \"uid\": \"b8f17e6b-75b4-4dd5-96cd-fb3cabc91ad0\",\n                \"resourceVersion\": \"558\",\n                \"creationTimestamp\": \"2020-01-14T21:28:05Z\",\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kubelet\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:28:05Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:count\": {},\n                            \"f:firstTimestamp\": {},\n                            \"f:involvedObject\": {\n                                \"f:apiVersion\": {},\n                                \"f:fieldPath\": {},\n                                \"f:kind\": {},\n                                \"f:name\": {},\n                                \"f:namespace\": {},\n                                \"f:resourceVersion\": {},\n                                \"f:uid\": {}\n                            },\n                            \"f:lastTimestamp\": {},\n                            \"f:message\": {},\n                            \"f:reason\": 
{},\n                            \"f:source\": {\n                                \"f:component\": {},\n                                \"f:host\": {}\n                            },\n                            \"f:type\": {}\n                        }\n                    }\n                ]\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-4md69\",\n                \"uid\": \"37fe8706-2809-4c8e-a140-851c9064adb1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"517\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:28:05Z\",\n            \"lastTimestamp\": \"2020-01-14T21:28:05Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-rh967.15e9de067e76347d\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-rh967.15e9de067e76347d\",\n                \"uid\": \"81e9fa54-580e-467d-906a-39a55bf5478b\",\n                \"resourceVersion\": \"402\",\n                \"creationTimestamp\": \"2020-01-14T21:27:43Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-rh967\",\n                \"uid\": \"ac5c939c-0083-48fe-bdc2-c223e5b29142\",\n                \"apiVersion\": \"v1\",\n       
         \"resourceVersion\": \"387\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-proxy-rh967 to kind-control-plane\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:27:43Z\",\n            \"lastTimestamp\": \"2020-01-14T21:27:43Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-rh967.15e9de069dac1f26\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-rh967.15e9de069dac1f26\",\n                \"uid\": \"95405921-0d4c-42ae-aedf-d5648d0d9a6e\",\n                \"resourceVersion\": \"412\",\n                \"creationTimestamp\": \"2020-01-14T21:27:44Z\",\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kubelet\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:27:44Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:count\": {},\n                            \"f:firstTimestamp\": {},\n                            \"f:involvedObject\": {\n                                \"f:apiVersion\": {},\n                                \"f:fieldPath\": {},\n                                \"f:kind\": {},\n                                \"f:name\": {},\n                                \"f:namespace\": {},\n                                \"f:resourceVersion\": {},\n                                \"f:uid\": {}\n                            },\n                           
 \"f:lastTimestamp\": {},\n                            \"f:message\": {},\n                            \"f:reason\": {},\n                            \"f:source\": {\n                                \"f:component\": {},\n                                \"f:host\": {}\n                            },\n                            \"f:type\": {}\n                        }\n                    }\n                ]\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-rh967\",\n                \"uid\": \"ac5c939c-0083-48fe-bdc2-c223e5b29142\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"399\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy:v1.18.0-alpha.1.681_c12a96f7f64648\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:27:44Z\",\n            \"lastTimestamp\": \"2020-01-14T21:27:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-rh967.15e9de06d16ee51f\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-rh967.15e9de06d16ee51f\",\n                \"uid\": \"6005074a-e6c8-41b4-9d96-1c10fa62874c\",\n                \"resourceVersion\": \"413\",\n                \"creationTimestamp\": \"2020-01-14T21:27:45Z\",\n                \"managedFields\": [\n                    {\n                        \"manager\": 
\"kubelet\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:27:45Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:count\": {},\n                            \"f:firstTimestamp\": {},\n                            \"f:involvedObject\": {\n                                \"f:apiVersion\": {},\n                                \"f:fieldPath\": {},\n                                \"f:kind\": {},\n                                \"f:name\": {},\n                                \"f:namespace\": {},\n                                \"f:resourceVersion\": {},\n                                \"f:uid\": {}\n                            },\n                            \"f:lastTimestamp\": {},\n                            \"f:message\": {},\n                            \"f:reason\": {},\n                            \"f:source\": {\n                                \"f:component\": {},\n                                \"f:host\": {}\n                            },\n                            \"f:type\": {}\n                        }\n                    }\n                ]\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-rh967\",\n                \"uid\": \"ac5c939c-0083-48fe-bdc2-c223e5b29142\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"399\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:27:45Z\",\n            
\"lastTimestamp\": \"2020-01-14T21:27:45Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-rh967.15e9de06d84d98e4\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-rh967.15e9de06d84d98e4\",\n                \"uid\": \"f04e9a72-6922-49fb-a301-acf363a89763\",\n                \"resourceVersion\": \"414\",\n                \"creationTimestamp\": \"2020-01-14T21:27:45Z\",\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kubelet\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:27:45Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:count\": {},\n                            \"f:firstTimestamp\": {},\n                            \"f:involvedObject\": {\n                                \"f:apiVersion\": {},\n                                \"f:fieldPath\": {},\n                                \"f:kind\": {},\n                                \"f:name\": {},\n                                \"f:namespace\": {},\n                                \"f:resourceVersion\": {},\n                                \"f:uid\": {}\n                            },\n                            \"f:lastTimestamp\": {},\n                            \"f:message\": {},\n                            \"f:reason\": {},\n                            \"f:source\": {\n                                \"f:component\": {},\n                                \"f:host\": {}\n                            },\n                            \"f:type\": {}\n                      
  }\n                    }\n                ]\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-rh967\",\n                \"uid\": \"ac5c939c-0083-48fe-bdc2-c223e5b29142\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"399\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:27:45Z\",\n            \"lastTimestamp\": \"2020-01-14T21:27:45Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-sllbk.15e9de0aa779812d\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-sllbk.15e9de0aa779812d\",\n                \"uid\": \"8da32e32-4338-4764-b517-06e7cec23e60\",\n                \"resourceVersion\": \"497\",\n                \"creationTimestamp\": \"2020-01-14T21:28:01Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-sllbk\",\n                \"uid\": \"0fcef59d-80b6-4e1f-8478-2e8bfd13129c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"489\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-proxy-sllbk to kind-worker2\",\n            \"source\": {\n                
\"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:28:01Z\",\n            \"lastTimestamp\": \"2020-01-14T21:28:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-sllbk.15e9de0ad9c524d2\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-sllbk.15e9de0ad9c524d2\",\n                \"uid\": \"8664a37a-4ae2-480a-8121-20bfcad893e7\",\n                \"resourceVersion\": \"529\",\n                \"creationTimestamp\": \"2020-01-14T21:28:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-sllbk\",\n                \"uid\": \"0fcef59d-80b6-4e1f-8478-2e8bfd13129c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"495\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy:v1.18.0-alpha.1.681_c12a96f7f64648\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:28:02Z\",\n            \"lastTimestamp\": \"2020-01-14T21:28:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-sllbk.15e9de0b931cf785\",\n                \"namespace\": \"kube-system\",\n 
               \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-sllbk.15e9de0b931cf785\",\n                \"uid\": \"f7ade517-8675-4885-83bd-3ab4983ad13d\",\n                \"resourceVersion\": \"554\",\n                \"creationTimestamp\": \"2020-01-14T21:28:05Z\",\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kubelet\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:28:05Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:count\": {},\n                            \"f:firstTimestamp\": {},\n                            \"f:involvedObject\": {\n                                \"f:apiVersion\": {},\n                                \"f:fieldPath\": {},\n                                \"f:kind\": {},\n                                \"f:name\": {},\n                                \"f:namespace\": {},\n                                \"f:resourceVersion\": {},\n                                \"f:uid\": {}\n                            },\n                            \"f:lastTimestamp\": {},\n                            \"f:message\": {},\n                            \"f:reason\": {},\n                            \"f:source\": {\n                                \"f:component\": {},\n                                \"f:host\": {}\n                            },\n                            \"f:type\": {}\n                        }\n                    }\n                ]\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-sllbk\",\n                \"uid\": \"0fcef59d-80b6-4e1f-8478-2e8bfd13129c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"495\",\n      
          \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:28:05Z\",\n            \"lastTimestamp\": \"2020-01-14T21:28:05Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-sllbk.15e9de0b9ddc6481\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy-sllbk.15e9de0b9ddc6481\",\n                \"uid\": \"b8ae5e88-486d-4572-8f20-1abc44bc883d\",\n                \"resourceVersion\": \"557\",\n                \"creationTimestamp\": \"2020-01-14T21:28:05Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-sllbk\",\n                \"uid\": \"0fcef59d-80b6-4e1f-8478-2e8bfd13129c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"495\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:28:05Z\",\n            \"lastTimestamp\": \"2020-01-14T21:28:05Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": 
\"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy.15e9de067dc976c2\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy.15e9de067dc976c2\",\n                \"uid\": \"361a0a47-14bc-49c2-918f-c8a6fde07880\",\n                \"resourceVersion\": \"396\",\n                \"creationTimestamp\": \"2020-01-14T21:27:43Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy\",\n                \"uid\": \"be6ce503-aa7e-43b3-9c82-1e67fd75e9df\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"206\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-proxy-rh967\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:27:43Z\",\n            \"lastTimestamp\": \"2020-01-14T21:27:43Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy.15e9de0aa6e19567\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy.15e9de0aa6e19567\",\n                \"uid\": \"10ff5778-4593-43ba-9fd8-f9ddd2ae5408\",\n                \"resourceVersion\": \"491\",\n                \"creationTimestamp\": \"2020-01-14T21:28:01Z\",\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kube-controller-manager\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n            
            \"time\": \"2020-01-14T21:28:01Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:count\": {},\n                            \"f:firstTimestamp\": {},\n                            \"f:involvedObject\": {\n                                \"f:apiVersion\": {},\n                                \"f:kind\": {},\n                                \"f:name\": {},\n                                \"f:namespace\": {},\n                                \"f:resourceVersion\": {},\n                                \"f:uid\": {}\n                            },\n                            \"f:lastTimestamp\": {},\n                            \"f:message\": {},\n                            \"f:reason\": {},\n                            \"f:source\": {\n                                \"f:component\": {}\n                            },\n                            \"f:type\": {}\n                        }\n                    }\n                ]\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy\",\n                \"uid\": \"be6ce503-aa7e-43b3-9c82-1e67fd75e9df\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"418\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-proxy-sllbk\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:28:01Z\",\n            \"lastTimestamp\": \"2020-01-14T21:28:01Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"kube-proxy.15e9de0ac39dc532\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-proxy.15e9de0ac39dc532\",\n                \"uid\": \"972428fc-ee08-4650-8865-a6d590fe2c01\",\n                \"resourceVersion\": \"519\",\n                \"creationTimestamp\": \"2020-01-14T21:28:02Z\",\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kube-controller-manager\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:28:02Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:count\": {},\n                            \"f:firstTimestamp\": {},\n                            \"f:involvedObject\": {\n                                \"f:apiVersion\": {},\n                                \"f:kind\": {},\n                                \"f:name\": {},\n                                \"f:namespace\": {},\n                                \"f:resourceVersion\": {},\n                                \"f:uid\": {}\n                            },\n                            \"f:lastTimestamp\": {},\n                            \"f:message\": {},\n                            \"f:reason\": {},\n                            \"f:source\": {\n                                \"f:component\": {}\n                            },\n                            \"f:type\": {}\n                        }\n                    }\n                ]\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy\",\n                \"uid\": \"be6ce503-aa7e-43b3-9c82-1e67fd75e9df\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"498\"\n           
 },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-proxy-4md69\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:28:02Z\",\n            \"lastTimestamp\": \"2020-01-14T21:28:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler.15e9de02afdb81ba\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-scheduler.15e9de02afdb81ba\",\n                \"uid\": \"13a0803f-6f2b-43fe-9228-d0eb610fee46\",\n                \"resourceVersion\": \"159\",\n                \"creationTimestamp\": \"2020-01-14T21:27:27Z\",\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kube-scheduler\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:27:27Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:count\": {},\n                            \"f:firstTimestamp\": {},\n                            \"f:involvedObject\": {\n                                \"f:apiVersion\": {},\n                                \"f:kind\": {},\n                                \"f:name\": {},\n                                \"f:namespace\": {},\n                                \"f:resourceVersion\": {},\n                                \"f:uid\": {}\n                            },\n                            \"f:lastTimestamp\": {},\n                            \"f:message\": {},\n                            \"f:reason\": {},\n            
                \"f:source\": {\n                                \"f:component\": {}\n                            },\n                            \"f:type\": {}\n                        }\n                    }\n                ]\n            },\n            \"involvedObject\": {\n                \"kind\": \"Endpoints\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler\",\n                \"uid\": \"6a64f848-ff18-425d-a71a-d597470c6b69\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"157\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"kind-control-plane_c401d2bf-7ea3-46fd-a507-5d6babc3a00c became leader\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:27:27Z\",\n            \"lastTimestamp\": \"2020-01-14T21:27:27Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler.15e9de02afdc0b94\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/events/kube-scheduler.15e9de02afdc0b94\",\n                \"uid\": \"d7876879-591d-4257-b70d-2ded6eaad04c\",\n                \"resourceVersion\": \"160\",\n                \"creationTimestamp\": \"2020-01-14T21:27:27Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler\",\n                \"uid\": \"23841b29-47ec-47e5-846b-94d101cd7b53\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"158\"\n            },\n            \"reason\": 
\"LeaderElection\",\n            \"message\": \"kind-control-plane_c401d2bf-7ea3-46fd-a507-5d6babc3a00c became leader\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2020-01-14T21:27:27Z\",\n            \"lastTimestamp\": \"2020-01-14T21:27:27Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        }\n    ]\n}\n{\n    \"kind\": \"ReplicationControllerList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"selfLink\": \"/api/v1/namespaces/kube-system/replicationcontrollers\",\n        \"resourceVersion\": \"12192\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"ServiceList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"selfLink\": \"/api/v1/namespaces/kube-system/services\",\n        \"resourceVersion\": \"12192\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/services/kube-dns\",\n                \"uid\": \"434fef3f-b258-4ea7-80fc-b4e91e97745d\",\n                \"resourceVersion\": \"187\",\n                \"creationTimestamp\": \"2020-01-14T21:27:28Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"kubernetes.io/name\": \"KubeDNS\"\n                },\n                \"annotations\": {\n                    \"prometheus.io/port\": \"9153\",\n                    \"prometheus.io/scrape\": \"true\"\n                },\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kubeadm\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": 
\"v1\",\n                        \"time\": \"2020-01-14T21:27:28Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:metadata\": {\n                                \"f:annotations\": {\n                                    \"f:prometheus.io/port\": {},\n                                    \"f:prometheus.io/scrape\": {},\n                                    \".\": {}\n                                },\n                                \"f:labels\": {\n                                    \"f:k8s-app\": {},\n                                    \"f:kubernetes.io/cluster-service\": {},\n                                    \"f:kubernetes.io/name\": {},\n                                    \".\": {}\n                                }\n                            },\n                            \"f:spec\": {\n                                \"f:clusterIP\": {},\n                                \"f:ports\": {\n                                    \"k:{\\\"port\\\":53,\\\"protocol\\\":\\\"TCP\\\"}\": {\n                                        \"f:name\": {},\n                                        \"f:port\": {},\n                                        \"f:protocol\": {},\n                                        \"f:targetPort\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"port\\\":53,\\\"protocol\\\":\\\"UDP\\\"}\": {\n                                        \"f:name\": {},\n                                        \"f:port\": {},\n                                        \"f:protocol\": {},\n                                        \"f:targetPort\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"port\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}\": {\n                                        \"f:name\": {},\n         
                               \"f:port\": {},\n                                        \"f:protocol\": {},\n                                        \"f:targetPort\": {},\n                                        \".\": {}\n                                    },\n                                    \".\": {}\n                                },\n                                \"f:selector\": {\n                                    \"f:k8s-app\": {},\n                                    \".\": {}\n                                },\n                                \"f:sessionAffinity\": {},\n                                \"f:type\": {}\n                            }\n                        }\n                    }\n                ]\n            },\n            \"spec\": {\n                \"ports\": [\n                    {\n                        \"name\": \"dns\",\n                        \"protocol\": \"UDP\",\n                        \"port\": 53,\n                        \"targetPort\": 53\n                    },\n                    {\n                        \"name\": \"dns-tcp\",\n                        \"protocol\": \"TCP\",\n                        \"port\": 53,\n                        \"targetPort\": 53\n                    },\n                    {\n                        \"name\": \"metrics\",\n                        \"protocol\": \"TCP\",\n                        \"port\": 9153,\n                        \"targetPort\": 9153\n                    }\n                ],\n                \"selector\": {\n                    \"k8s-app\": \"kube-dns\"\n                },\n                \"clusterIP\": \"10.96.0.10\",\n                \"type\": \"ClusterIP\",\n                \"sessionAffinity\": \"None\"\n            },\n            \"status\": {\n                \"loadBalancer\": {}\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"DaemonSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"selfLink\": 
\"/apis/apps/v1/namespaces/kube-system/daemonsets\",\n        \"resourceVersion\": \"12192\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kindnet\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet\",\n                \"uid\": \"e718f5b5-6f0d-49a2-a4e2-6a6f0b1641a9\",\n                \"resourceVersion\": \"574\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2020-01-14T21:27:30Z\",\n                \"labels\": {\n                    \"app\": \"kindnet\",\n                    \"k8s-app\": \"kindnet\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"deprecated.daemonset.template.generation\": \"1\"\n                },\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kubectl\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"apps/v1\",\n                        \"time\": \"2020-01-14T21:27:30Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:metadata\": {\n                                \"f:annotations\": {\n                                    \"f:deprecated.daemonset.template.generation\": {},\n                                    \".\": {}\n                                },\n                                \"f:labels\": {\n                                    \"f:app\": {},\n                                    \"f:k8s-app\": {},\n                                    \"f:tier\": {},\n                                    \".\": {}\n                                }\n                            },\n                            \"f:spec\": {\n                                \"f:revisionHistoryLimit\": {},\n                                \"f:selector\": {\n 
                                   \"f:matchLabels\": {\n                                        \"f:app\": {},\n                                        \".\": {}\n                                    }\n                                },\n                                \"f:template\": {\n                                    \"f:metadata\": {\n                                        \"f:labels\": {\n                                            \"f:app\": {},\n                                            \"f:k8s-app\": {},\n                                            \"f:tier\": {},\n                                            \".\": {}\n                                        }\n                                    },\n                                    \"f:spec\": {\n                                        \"f:containers\": {\n                                            \"k:{\\\"name\\\":\\\"kindnet-cni\\\"}\": {\n                                                \"f:env\": {\n                                                    \"k:{\\\"name\\\":\\\"HOST_IP\\\"}\": {\n                                                        \"f:name\": {},\n                                                        \"f:valueFrom\": {\n                                                            \"f:fieldRef\": {\n                                                                \"f:apiVersion\": {},\n                                                                \"f:fieldPath\": {},\n                                                                \".\": {}\n                                                            },\n                                                            \".\": {}\n                                                        },\n                                                        \".\": {}\n                                                    },\n                                                    \"k:{\\\"name\\\":\\\"POD_IP\\\"}\": {\n                                 
                       \"f:name\": {},\n                                                        \"f:valueFrom\": {\n                                                            \"f:fieldRef\": {\n                                                                \"f:apiVersion\": {},\n                                                                \"f:fieldPath\": {},\n                                                                \".\": {}\n                                                            },\n                                                            \".\": {}\n                                                        },\n                                                        \".\": {}\n                                                    },\n                                                    \"k:{\\\"name\\\":\\\"POD_SUBNET\\\"}\": {\n                                                        \"f:name\": {},\n                                                        \"f:value\": {},\n                                                        \".\": {}\n                                                    },\n                                                    \".\": {}\n                                                },\n                                                \"f:image\": {},\n                                                \"f:imagePullPolicy\": {},\n                                                \"f:name\": {},\n                                                \"f:resources\": {\n                                                    \"f:limits\": {\n                                                        \"f:cpu\": {},\n                                                        \"f:memory\": {},\n                                                        \".\": {}\n                                                    },\n                                                    \"f:requests\": {\n                                                        \"f:cpu\": {},\n 
                                                       \"f:memory\": {},\n                                                        \".\": {}\n                                                    },\n                                                    \".\": {}\n                                                },\n                                                \"f:securityContext\": {\n                                                    \"f:capabilities\": {\n                                                        \"f:add\": {},\n                                                        \".\": {}\n                                                    },\n                                                    \"f:privileged\": {},\n                                                    \".\": {}\n                                                },\n                                                \"f:terminationMessagePath\": {},\n                                                \"f:terminationMessagePolicy\": {},\n                                                \"f:volumeMounts\": {\n                                                    \"k:{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\"}\": {\n                                                        \"f:mountPath\": {},\n                                                        \"f:name\": {},\n                                                        \".\": {}\n                                                    },\n                                                    \"k:{\\\"mountPath\\\":\\\"/lib/modules\\\"}\": {\n                                                        \"f:mountPath\": {},\n                                                        \"f:name\": {},\n                                                        \"f:readOnly\": {},\n                                                        \".\": {}\n                                                    },\n                                                    
\"k:{\\\"mountPath\\\":\\\"/run/xtables.lock\\\"}\": {\n                                                        \"f:mountPath\": {},\n                                                        \"f:name\": {},\n                                                        \".\": {}\n                                                    },\n                                                    \".\": {}\n                                                },\n                                                \".\": {}\n                                            }\n                                        },\n                                        \"f:dnsPolicy\": {},\n                                        \"f:hostNetwork\": {},\n                                        \"f:restartPolicy\": {},\n                                        \"f:schedulerName\": {},\n                                        \"f:securityContext\": {},\n                                        \"f:serviceAccount\": {},\n                                        \"f:serviceAccountName\": {},\n                                        \"f:terminationGracePeriodSeconds\": {},\n                                        \"f:tolerations\": {},\n                                        \"f:volumes\": {\n                                            \"k:{\\\"name\\\":\\\"cni-cfg\\\"}\": {\n                                                \"f:hostPath\": {\n                                                    \"f:path\": {},\n                                                    \"f:type\": {},\n                                                    \".\": {}\n                                                },\n                                                \"f:name\": {},\n                                                \".\": {}\n                                            },\n                                            \"k:{\\\"name\\\":\\\"lib-modules\\\"}\": {\n                                                \"f:hostPath\": {\n    
                                                \"f:path\": {},\n                                                    \"f:type\": {},\n                                                    \".\": {}\n                                                },\n                                                \"f:name\": {},\n                                                \".\": {}\n                                            },\n                                            \"k:{\\\"name\\\":\\\"xtables-lock\\\"}\": {\n                                                \"f:hostPath\": {\n                                                    \"f:path\": {},\n                                                    \"f:type\": {},\n                                                    \".\": {}\n                                                },\n                                                \"f:name\": {},\n                                                \".\": {}\n                                            },\n                                            \".\": {}\n                                        }\n                                    }\n                                },\n                                \"f:updateStrategy\": {\n                                    \"f:rollingUpdate\": {\n                                        \"f:maxUnavailable\": {},\n                                        \".\": {}\n                                    },\n                                    \"f:type\": {}\n                                }\n                            }\n                        }\n                    },\n                    {\n                        \"manager\": \"kube-controller-manager\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"apps/v1\",\n                        \"time\": \"2020-01-14T21:28:06Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                           
 \"f:status\": {\n                                \"f:currentNumberScheduled\": {},\n                                \"f:desiredNumberScheduled\": {},\n                                \"f:numberAvailable\": {},\n                                \"f:numberReady\": {},\n                                \"f:observedGeneration\": {},\n                                \"f:updatedNumberScheduled\": {}\n                            }\n                        }\n                    }\n                ]\n            },\n            \"spec\": {\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"app\": \"kindnet\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"app\": \"kindnet\",\n                            \"k8s-app\": \"kindnet\",\n                            \"tier\": \"node\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"cni-cfg\",\n                                \"hostPath\": {\n                                    \"path\": \"/etc/cni/net.d\",\n                                    \"type\": \"\"\n                                }\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"hostPath\": {\n                                    \"path\": \"/run/xtables.lock\",\n                                    \"type\": \"FileOrCreate\"\n                                }\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"hostPath\": {\n                                    \"path\": 
\"/lib/modules\",\n                                    \"type\": \"\"\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"kindnet-cni\",\n                                \"image\": \"kindest/kindnetd:0.5.4\",\n                                \"env\": [\n                                    {\n                                        \"name\": \"HOST_IP\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"status.hostIP\"\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"POD_IP\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"status.podIP\"\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"POD_SUBNET\",\n                                        \"value\": \"10.244.0.0/16\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"50Mi\"\n                                    },\n                                    \"requests\": {\n      
                                  \"cpu\": \"100m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"cni-cfg\",\n                                        \"mountPath\": \"/etc/cni/net.d\"\n                                    },\n                                    {\n                                        \"name\": \"xtables-lock\",\n                                        \"mountPath\": \"/run/xtables.lock\"\n                                    },\n                                    {\n                                        \"name\": \"lib-modules\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/lib/modules\"\n                                    }\n                                ],\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_RAW\",\n                                            \"NET_ADMIN\"\n                                        ]\n                                    },\n                                    \"privileged\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"serviceAccountName\": \"kindnet\",\n      
                  \"serviceAccount\": \"kindnet\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"operator\": \"Exists\",\n                                \"effect\": \"NoSchedule\"\n                            }\n                        ]\n                    }\n                },\n                \"updateStrategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": 1\n                    }\n                },\n                \"revisionHistoryLimit\": 10\n            },\n            \"status\": {\n                \"currentNumberScheduled\": 3,\n                \"numberMisscheduled\": 0,\n                \"desiredNumberScheduled\": 3,\n                \"numberReady\": 3,\n                \"observedGeneration\": 1,\n                \"updatedNumberScheduled\": 3,\n                \"numberAvailable\": 3\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy\",\n                \"uid\": \"be6ce503-aa7e-43b3-9c82-1e67fd75e9df\",\n                \"resourceVersion\": \"572\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2020-01-14T21:27:28Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\"\n                },\n                \"annotations\": {\n                    \"deprecated.daemonset.template.generation\": \"1\"\n                },\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kubeadm\",\n                        \"operation\": \"Update\",\n     
                   \"apiVersion\": \"apps/v1\",\n                        \"time\": \"2020-01-14T21:27:28Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:metadata\": {\n                                \"f:annotations\": {\n                                    \"f:deprecated.daemonset.template.generation\": {},\n                                    \".\": {}\n                                },\n                                \"f:labels\": {\n                                    \"f:k8s-app\": {},\n                                    \".\": {}\n                                }\n                            },\n                            \"f:spec\": {\n                                \"f:revisionHistoryLimit\": {},\n                                \"f:selector\": {\n                                    \"f:matchLabels\": {\n                                        \"f:k8s-app\": {},\n                                        \".\": {}\n                                    }\n                                },\n                                \"f:template\": {\n                                    \"f:metadata\": {\n                                        \"f:labels\": {\n                                            \"f:k8s-app\": {},\n                                            \".\": {}\n                                        }\n                                    },\n                                    \"f:spec\": {\n                                        \"f:containers\": {\n                                            \"k:{\\\"name\\\":\\\"kube-proxy\\\"}\": {\n                                                \"f:command\": {},\n                                                \"f:env\": {\n                                                    \"k:{\\\"name\\\":\\\"NODE_NAME\\\"}\": {\n                                                        \"f:name\": {},\n                                    
                    \"f:valueFrom\": {\n                                                            \"f:fieldRef\": {\n                                                                \"f:apiVersion\": {},\n                                                                \"f:fieldPath\": {},\n                                                                \".\": {}\n                                                            },\n                                                            \".\": {}\n                                                        },\n                                                        \".\": {}\n                                                    },\n                                                    \".\": {}\n                                                },\n                                                \"f:image\": {},\n                                                \"f:imagePullPolicy\": {},\n                                                \"f:name\": {},\n                                                \"f:resources\": {},\n                                                \"f:securityContext\": {\n                                                    \"f:privileged\": {},\n                                                    \".\": {}\n                                                },\n                                                \"f:terminationMessagePath\": {},\n                                                \"f:terminationMessagePolicy\": {},\n                                                \"f:volumeMounts\": {\n                                                    \"k:{\\\"mountPath\\\":\\\"/lib/modules\\\"}\": {\n                                                        \"f:mountPath\": {},\n                                                        \"f:name\": {},\n                                                        \"f:readOnly\": {},\n                                                        \".\": {}\n                      
                              },\n                                                    \"k:{\\\"mountPath\\\":\\\"/run/xtables.lock\\\"}\": {\n                                                        \"f:mountPath\": {},\n                                                        \"f:name\": {},\n                                                        \".\": {}\n                                                    },\n                                                    \"k:{\\\"mountPath\\\":\\\"/var/lib/kube-proxy\\\"}\": {\n                                                        \"f:mountPath\": {},\n                                                        \"f:name\": {},\n                                                        \".\": {}\n                                                    },\n                                                    \".\": {}\n                                                },\n                                                \".\": {}\n                                            }\n                                        },\n                                        \"f:dnsPolicy\": {},\n                                        \"f:hostNetwork\": {},\n                                        \"f:nodeSelector\": {\n                                            \"f:beta.kubernetes.io/os\": {},\n                                            \".\": {}\n                                        },\n                                        \"f:priorityClassName\": {},\n                                        \"f:restartPolicy\": {},\n                                        \"f:schedulerName\": {},\n                                        \"f:securityContext\": {},\n                                        \"f:serviceAccount\": {},\n                                        \"f:serviceAccountName\": {},\n                                        \"f:terminationGracePeriodSeconds\": {},\n                                        \"f:tolerations\": {},\n               
                         \"f:volumes\": {\n                                            \"k:{\\\"name\\\":\\\"kube-proxy\\\"}\": {\n                                                \"f:configMap\": {\n                                                    \"f:defaultMode\": {},\n                                                    \"f:name\": {},\n                                                    \".\": {}\n                                                },\n                                                \"f:name\": {},\n                                                \".\": {}\n                                            },\n                                            \"k:{\\\"name\\\":\\\"lib-modules\\\"}\": {\n                                                \"f:hostPath\": {\n                                                    \"f:path\": {},\n                                                    \"f:type\": {},\n                                                    \".\": {}\n                                                },\n                                                \"f:name\": {},\n                                                \".\": {}\n                                            },\n                                            \"k:{\\\"name\\\":\\\"xtables-lock\\\"}\": {\n                                                \"f:hostPath\": {\n                                                    \"f:path\": {},\n                                                    \"f:type\": {},\n                                                    \".\": {}\n                                                },\n                                                \"f:name\": {},\n                                                \".\": {}\n                                            },\n                                            \".\": {}\n                                        }\n                                    }\n                                },\n                                
\"f:updateStrategy\": {\n                                    \"f:rollingUpdate\": {\n                                        \"f:maxUnavailable\": {},\n                                        \".\": {}\n                                    },\n                                    \"f:type\": {}\n                                }\n                            }\n                        }\n                    },\n                    {\n                        \"manager\": \"kube-controller-manager\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"apps/v1\",\n                        \"time\": \"2020-01-14T21:28:06Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:status\": {\n                                \"f:currentNumberScheduled\": {},\n                                \"f:desiredNumberScheduled\": {},\n                                \"f:numberAvailable\": {},\n                                \"f:numberReady\": {},\n                                \"f:observedGeneration\": {},\n                                \"f:updatedNumberScheduled\": {}\n                            }\n                        }\n                    }\n                ]\n            },\n            \"spec\": {\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-proxy\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-proxy\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"kube-proxy\",\n                                \"configMap\": {\n                 
                   \"name\": \"kube-proxy\",\n                                    \"defaultMode\": 420\n                                }\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"hostPath\": {\n                                    \"path\": \"/run/xtables.lock\",\n                                    \"type\": \"FileOrCreate\"\n                                }\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"hostPath\": {\n                                    \"path\": \"/lib/modules\",\n                                    \"type\": \"\"\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"kube-proxy\",\n                                \"image\": \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.1.681_c12a96f7f64648\",\n                                \"command\": [\n                                    \"/usr/local/bin/kube-proxy\",\n                                    \"--config=/var/lib/kube-proxy/config.conf\",\n                                    \"--hostname-override=$(NODE_NAME)\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"NODE_NAME\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"spec.nodeName\"\n                                            }\n                                        }\n                                    }\n                                
],\n                                \"resources\": {},\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"kube-proxy\",\n                                        \"mountPath\": \"/var/lib/kube-proxy\"\n                                    },\n                                    {\n                                        \"name\": \"xtables-lock\",\n                                        \"mountPath\": \"/run/xtables.lock\"\n                                    },\n                                    {\n                                        \"name\": \"lib-modules\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/lib/modules\"\n                                    }\n                                ],\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"privileged\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"nodeSelector\": {\n                            \"beta.kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"kube-proxy\",\n                        \"serviceAccount\": \"kube-proxy\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            
{\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            },\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-node-critical\"\n                    }\n                },\n                \"updateStrategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": 1\n                    }\n                },\n                \"revisionHistoryLimit\": 10\n            },\n            \"status\": {\n                \"currentNumberScheduled\": 3,\n                \"numberMisscheduled\": 0,\n                \"desiredNumberScheduled\": 3,\n                \"numberReady\": 3,\n                \"observedGeneration\": 1,\n                \"updatedNumberScheduled\": 3,\n                \"numberAvailable\": 3\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"DeploymentList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/deployments\",\n        \"resourceVersion\": \"12192\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/deployments/coredns\",\n                \"uid\": \"692ea41e-e15a-43e1-bb0e-4911945a514a\",\n                \"resourceVersion\": \"596\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2020-01-14T21:27:28Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                
\"managedFields\": [\n                    {\n                        \"manager\": \"kubeadm\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"apps/v1\",\n                        \"time\": \"2020-01-14T21:27:28Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:metadata\": {\n                                \"f:labels\": {\n                                    \"f:k8s-app\": {},\n                                    \".\": {}\n                                }\n                            },\n                            \"f:spec\": {\n                                \"f:progressDeadlineSeconds\": {},\n                                \"f:replicas\": {},\n                                \"f:revisionHistoryLimit\": {},\n                                \"f:selector\": {\n                                    \"f:matchLabels\": {\n                                        \"f:k8s-app\": {},\n                                        \".\": {}\n                                    }\n                                },\n                                \"f:strategy\": {\n                                    \"f:rollingUpdate\": {\n                                        \"f:maxSurge\": {},\n                                        \"f:maxUnavailable\": {},\n                                        \".\": {}\n                                    },\n                                    \"f:type\": {}\n                                },\n                                \"f:template\": {\n                                    \"f:metadata\": {\n                                        \"f:labels\": {\n                                            \"f:k8s-app\": {},\n                                            \".\": {}\n                                        }\n                                    },\n                                    \"f:spec\": {\n                 
                       \"f:containers\": {\n                                            \"k:{\\\"name\\\":\\\"coredns\\\"}\": {\n                                                \"f:args\": {},\n                                                \"f:image\": {},\n                                                \"f:imagePullPolicy\": {},\n                                                \"f:livenessProbe\": {\n                                                    \"f:failureThreshold\": {},\n                                                    \"f:httpGet\": {\n                                                        \"f:path\": {},\n                                                        \"f:port\": {},\n                                                        \"f:scheme\": {},\n                                                        \".\": {}\n                                                    },\n                                                    \"f:initialDelaySeconds\": {},\n                                                    \"f:periodSeconds\": {},\n                                                    \"f:successThreshold\": {},\n                                                    \"f:timeoutSeconds\": {},\n                                                    \".\": {}\n                                                },\n                                                \"f:name\": {},\n                                                \"f:ports\": {\n                                                    \"k:{\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}\": {\n                                                        \"f:containerPort\": {},\n                                                        \"f:name\": {},\n                                                        \"f:protocol\": {},\n                                                        \".\": {}\n                                                    },\n                                                    
\"k:{\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"}\": {\n                                                        \"f:containerPort\": {},\n                                                        \"f:name\": {},\n                                                        \"f:protocol\": {},\n                                                        \".\": {}\n                                                    },\n                                                    \"k:{\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}\": {\n                                                        \"f:containerPort\": {},\n                                                        \"f:name\": {},\n                                                        \"f:protocol\": {},\n                                                        \".\": {}\n                                                    },\n                                                    \".\": {}\n                                                },\n                                                \"f:readinessProbe\": {\n                                                    \"f:failureThreshold\": {},\n                                                    \"f:httpGet\": {\n                                                        \"f:path\": {},\n                                                        \"f:port\": {},\n                                                        \"f:scheme\": {},\n                                                        \".\": {}\n                                                    },\n                                                    \"f:periodSeconds\": {},\n                                                    \"f:successThreshold\": {},\n                                                    \"f:timeoutSeconds\": {},\n                                                    \".\": {}\n                                                },\n                                                \"f:resources\": 
{\n                                                    \"f:limits\": {\n                                                        \"f:memory\": {},\n                                                        \".\": {}\n                                                    },\n                                                    \"f:requests\": {\n                                                        \"f:cpu\": {},\n                                                        \"f:memory\": {},\n                                                        \".\": {}\n                                                    },\n                                                    \".\": {}\n                                                },\n                                                \"f:securityContext\": {\n                                                    \"f:allowPrivilegeEscalation\": {},\n                                                    \"f:capabilities\": {\n                                                        \"f:add\": {},\n                                                        \"f:drop\": {},\n                                                        \".\": {}\n                                                    },\n                                                    \"f:readOnlyRootFilesystem\": {},\n                                                    \".\": {}\n                                                },\n                                                \"f:terminationMessagePath\": {},\n                                                \"f:terminationMessagePolicy\": {},\n                                                \"f:volumeMounts\": {\n                                                    \"k:{\\\"mountPath\\\":\\\"/etc/coredns\\\"}\": {\n                                                        \"f:mountPath\": {},\n                                                        \"f:name\": {},\n                                                        \"f:readOnly\": 
{},\n                                                        \".\": {}\n                                                    },\n                                                    \".\": {}\n                                                },\n                                                \".\": {}\n                                            }\n                                        },\n                                        \"f:dnsPolicy\": {},\n                                        \"f:nodeSelector\": {\n                                            \"f:beta.kubernetes.io/os\": {},\n                                            \".\": {}\n                                        },\n                                        \"f:priorityClassName\": {},\n                                        \"f:restartPolicy\": {},\n                                        \"f:schedulerName\": {},\n                                        \"f:securityContext\": {},\n                                        \"f:serviceAccount\": {},\n                                        \"f:serviceAccountName\": {},\n                                        \"f:terminationGracePeriodSeconds\": {},\n                                        \"f:tolerations\": {},\n                                        \"f:volumes\": {\n                                            \"k:{\\\"name\\\":\\\"config-volume\\\"}\": {\n                                                \"f:configMap\": {\n                                                    \"f:defaultMode\": {},\n                                                    \"f:items\": {},\n                                                    \"f:name\": {},\n                                                    \".\": {}\n                                                },\n                                                \"f:name\": {},\n                                                \".\": {}\n                                            },\n                                
            \".\": {}\n                                        }\n                                    }\n                                }\n                            }\n                        }\n                    },\n                    {\n                        \"manager\": \"kube-controller-manager\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"apps/v1\",\n                        \"time\": \"2020-01-14T21:28:11Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:metadata\": {\n                                \"f:annotations\": {\n                                    \"f:deployment.kubernetes.io/revision\": {},\n                                    \".\": {}\n                                }\n                            },\n                            \"f:status\": {\n                                \"f:availableReplicas\": {},\n                                \"f:conditions\": {\n                                    \"k:{\\\"type\\\":\\\"Available\\\"}\": {\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:lastUpdateTime\": {},\n                                        \"f:message\": {},\n                                        \"f:reason\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"Progressing\\\"}\": {\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:lastUpdateTime\": {},\n                                        \"f:message\": {},\n                                        \"f:reason\": {},\n                                        \"f:status\": {},\n                            
            \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \".\": {}\n                                },\n                                \"f:observedGeneration\": {},\n                                \"f:readyReplicas\": {},\n                                \"f:replicas\": {},\n                                \"f:updatedReplicas\": {}\n                            }\n                        }\n                    }\n                ]\n            },\n            \"spec\": {\n                \"replicas\": 2,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-dns\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-dns\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"configMap\": {\n                                    \"name\": \"coredns\",\n                                    \"items\": [\n                                        {\n                                            \"key\": \"Corefile\",\n                                            \"path\": \"Corefile\"\n                                        }\n                                    ],\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"coredns\",\n                                \"image\": \"k8s.gcr.io/coredns:1.6.5\",\n         
                       \"args\": [\n                                    \"-conf\",\n                                    \"/etc/coredns/Corefile\"\n                                ],\n                                \"ports\": [\n                                    {\n                                        \"name\": \"dns\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"UDP\"\n                                    },\n                                    {\n                                        \"name\": \"dns-tcp\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"TCP\"\n                                    },\n                                    {\n                                        \"name\": \"metrics\",\n                                        \"containerPort\": 9153,\n                                        \"protocol\": \"TCP\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"170Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"70Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"config-volume\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/etc/coredns\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": 
{\n                                        \"path\": \"/health\",\n                                        \"port\": 8080,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 5\n                                },\n                                \"readinessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/ready\",\n                                        \"port\": 8181,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"timeoutSeconds\": 1,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_BIND_SERVICE\"\n                                        ],\n                                        \"drop\": [\n                                            \"all\"\n                                        ]\n                                    },\n                                    \"readOnlyRootFilesystem\": true,\n                                    
\"allowPrivilegeEscalation\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"beta.kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns\",\n                        \"serviceAccount\": \"coredns\",\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            },\n                            {\n                                \"key\": \"node-role.kubernetes.io/master\",\n                                \"effect\": \"NoSchedule\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": 1,\n                        \"maxSurge\": \"25%\"\n                    }\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 1,\n                \"replicas\": 2,\n                \"updatedReplicas\": 2,\n                \"readyReplicas\": 2,\n                \"availableReplicas\": 2,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        
\"status\": \"True\",\n                        \"lastUpdateTime\": \"2020-01-14T21:28:10Z\",\n                        \"lastTransitionTime\": \"2020-01-14T21:28:10Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        \"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2020-01-14T21:28:11Z\",\n                        \"lastTransitionTime\": \"2020-01-14T21:27:43Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"coredns-6955765f44\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"ReplicaSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/replicasets\",\n        \"resourceVersion\": \"12193\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/apis/apps/v1/namespaces/kube-system/replicasets/coredns-6955765f44\",\n                \"uid\": \"2f78164f-7a71-4604-9ac5-462c80cae439\",\n                \"resourceVersion\": \"594\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2020-01-14T21:27:43Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"6955765f44\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"2\",\n                    \"deployment.kubernetes.io/max-replicas\": \"3\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                
\"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"coredns\",\n                        \"uid\": \"692ea41e-e15a-43e1-bb0e-4911945a514a\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ],\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kube-controller-manager\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"apps/v1\",\n                        \"time\": \"2020-01-14T21:28:11Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:metadata\": {\n                                \"f:annotations\": {\n                                    \"f:deployment.kubernetes.io/desired-replicas\": {},\n                                    \"f:deployment.kubernetes.io/max-replicas\": {},\n                                    \"f:deployment.kubernetes.io/revision\": {},\n                                    \".\": {}\n                                },\n                                \"f:labels\": {\n                                    \"f:k8s-app\": {},\n                                    \"f:pod-template-hash\": {},\n                                    \".\": {}\n                                },\n                                \"f:ownerReferences\": {\n                                    \"k:{\\\"uid\\\":\\\"692ea41e-e15a-43e1-bb0e-4911945a514a\\\"}\": {\n                                        \"f:apiVersion\": {},\n                                        \"f:blockOwnerDeletion\": {},\n                                        \"f:controller\": {},\n                                        \"f:kind\": {},\n                                        \"f:name\": {},\n                   
                     \"f:uid\": {},\n                                        \".\": {}\n                                    },\n                                    \".\": {}\n                                }\n                            },\n                            \"f:spec\": {\n                                \"f:replicas\": {},\n                                \"f:selector\": {\n                                    \"f:matchLabels\": {\n                                        \"f:k8s-app\": {},\n                                        \"f:pod-template-hash\": {},\n                                        \".\": {}\n                                    }\n                                },\n                                \"f:template\": {\n                                    \"f:metadata\": {\n                                        \"f:labels\": {\n                                            \"f:k8s-app\": {},\n                                            \"f:pod-template-hash\": {},\n                                            \".\": {}\n                                        }\n                                    },\n                                    \"f:spec\": {\n                                        \"f:containers\": {\n                                            \"k:{\\\"name\\\":\\\"coredns\\\"}\": {\n                                                \"f:args\": {},\n                                                \"f:image\": {},\n                                                \"f:imagePullPolicy\": {},\n                                                \"f:livenessProbe\": {\n                                                    \"f:failureThreshold\": {},\n                                                    \"f:httpGet\": {\n                                                        \"f:path\": {},\n                                                        \"f:port\": {},\n                                                        \"f:scheme\": {},\n          
                                              \".\": {}\n                                                    },\n                                                    \"f:initialDelaySeconds\": {},\n                                                    \"f:periodSeconds\": {},\n                                                    \"f:successThreshold\": {},\n                                                    \"f:timeoutSeconds\": {},\n                                                    \".\": {}\n                                                },\n                                                \"f:name\": {},\n                                                \"f:ports\": {\n                                                    \"k:{\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}\": {\n                                                        \"f:containerPort\": {},\n                                                        \"f:name\": {},\n                                                        \"f:protocol\": {},\n                                                        \".\": {}\n                                                    },\n                                                    \"k:{\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"}\": {\n                                                        \"f:containerPort\": {},\n                                                        \"f:name\": {},\n                                                        \"f:protocol\": {},\n                                                        \".\": {}\n                                                    },\n                                                    \"k:{\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}\": {\n                                                        \"f:containerPort\": {},\n                                                        \"f:name\": {},\n                                                        \"f:protocol\": {},\n                       
                                 \".\": {}\n                                                    },\n                                                    \".\": {}\n                                                },\n                                                \"f:readinessProbe\": {\n                                                    \"f:failureThreshold\": {},\n                                                    \"f:httpGet\": {\n                                                        \"f:path\": {},\n                                                        \"f:port\": {},\n                                                        \"f:scheme\": {},\n                                                        \".\": {}\n                                                    },\n                                                    \"f:periodSeconds\": {},\n                                                    \"f:successThreshold\": {},\n                                                    \"f:timeoutSeconds\": {},\n                                                    \".\": {}\n                                                },\n                                                \"f:resources\": {\n                                                    \"f:limits\": {\n                                                        \"f:memory\": {},\n                                                        \".\": {}\n                                                    },\n                                                    \"f:requests\": {\n                                                        \"f:cpu\": {},\n                                                        \"f:memory\": {},\n                                                        \".\": {}\n                                                    },\n                                                    \".\": {}\n                                                },\n                                                \"f:securityContext\": 
{\n                                                    \"f:allowPrivilegeEscalation\": {},\n                                                    \"f:capabilities\": {\n                                                        \"f:add\": {},\n                                                        \"f:drop\": {},\n                                                        \".\": {}\n                                                    },\n                                                    \"f:readOnlyRootFilesystem\": {},\n                                                    \".\": {}\n                                                },\n                                                \"f:terminationMessagePath\": {},\n                                                \"f:terminationMessagePolicy\": {},\n                                                \"f:volumeMounts\": {\n                                                    \"k:{\\\"mountPath\\\":\\\"/etc/coredns\\\"}\": {\n                                                        \"f:mountPath\": {},\n                                                        \"f:name\": {},\n                                                        \"f:readOnly\": {},\n                                                        \".\": {}\n                                                    },\n                                                    \".\": {}\n                                                },\n                                                \".\": {}\n                                            }\n                                        },\n                                        \"f:dnsPolicy\": {},\n                                        \"f:nodeSelector\": {\n                                            \"f:beta.kubernetes.io/os\": {},\n                                            \".\": {}\n                                        },\n                                        \"f:priorityClassName\": {},\n                                 
       \"f:restartPolicy\": {},\n                                        \"f:schedulerName\": {},\n                                        \"f:securityContext\": {},\n                                        \"f:serviceAccount\": {},\n                                        \"f:serviceAccountName\": {},\n                                        \"f:terminationGracePeriodSeconds\": {},\n                                        \"f:tolerations\": {},\n                                        \"f:volumes\": {\n                                            \"k:{\\\"name\\\":\\\"config-volume\\\"}\": {\n                                                \"f:configMap\": {\n                                                    \"f:defaultMode\": {},\n                                                    \"f:items\": {},\n                                                    \"f:name\": {},\n                                                    \".\": {}\n                                                },\n                                                \"f:name\": {},\n                                                \".\": {}\n                                            },\n                                            \".\": {}\n                                        }\n                                    }\n                                }\n                            },\n                            \"f:status\": {\n                                \"f:availableReplicas\": {},\n                                \"f:fullyLabeledReplicas\": {},\n                                \"f:observedGeneration\": {},\n                                \"f:readyReplicas\": {},\n                                \"f:replicas\": {}\n                            }\n                        }\n                    }\n                ]\n            },\n            \"spec\": {\n                \"replicas\": 2,\n                \"selector\": {\n                    \"matchLabels\": {\n                        
\"k8s-app\": \"kube-dns\",\n                        \"pod-template-hash\": \"6955765f44\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-dns\",\n                            \"pod-template-hash\": \"6955765f44\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"configMap\": {\n                                    \"name\": \"coredns\",\n                                    \"items\": [\n                                        {\n                                            \"key\": \"Corefile\",\n                                            \"path\": \"Corefile\"\n                                        }\n                                    ],\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"coredns\",\n                                \"image\": \"k8s.gcr.io/coredns:1.6.5\",\n                                \"args\": [\n                                    \"-conf\",\n                                    \"/etc/coredns/Corefile\"\n                                ],\n                                \"ports\": [\n                                    {\n                                        \"name\": \"dns\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"UDP\"\n                                    },\n                                    {\n                                        
\"name\": \"dns-tcp\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"TCP\"\n                                    },\n                                    {\n                                        \"name\": \"metrics\",\n                                        \"containerPort\": 9153,\n                                        \"protocol\": \"TCP\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"170Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"70Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"config-volume\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/etc/coredns\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/health\",\n                                        \"port\": 8080,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 5\n                                },\n   
                             \"readinessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/ready\",\n                                        \"port\": 8181,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"timeoutSeconds\": 1,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_BIND_SERVICE\"\n                                        ],\n                                        \"drop\": [\n                                            \"all\"\n                                        ]\n                                    },\n                                    \"readOnlyRootFilesystem\": true,\n                                    \"allowPrivilegeEscalation\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"beta.kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"coredns\",\n                        \"serviceAccount\": \"coredns\",\n                        
\"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            },\n                            {\n                                \"key\": \"node-role.kubernetes.io/master\",\n                                \"effect\": \"NoSchedule\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 2,\n                \"fullyLabeledReplicas\": 2,\n                \"readyReplicas\": 2,\n                \"availableReplicas\": 2,\n                \"observedGeneration\": 1\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"PodList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"selfLink\": \"/api/v1/namespaces/kube-system/pods\",\n        \"resourceVersion\": \"12194\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-45qgf\",\n                \"generateName\": \"coredns-6955765f44-\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/coredns-6955765f44-45qgf\",\n                \"uid\": \"0f458487-a615-4729-92be-1253a0e4a65b\",\n                \"resourceVersion\": \"586\",\n                \"creationTimestamp\": \"2020-01-14T21:27:43Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"6955765f44\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        
\"name\": \"coredns-6955765f44\",\n                        \"uid\": \"2f78164f-7a71-4604-9ac5-462c80cae439\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"config-volume\",\n                        \"configMap\": {\n                            \"name\": \"coredns\",\n                            \"items\": [\n                                {\n                                    \"key\": \"Corefile\",\n                                    \"path\": \"Corefile\"\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"coredns-token-6skql\",\n                        \"secret\": {\n                            \"secretName\": \"coredns-token-6skql\",\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"image\": \"k8s.gcr.io/coredns:1.6.5\",\n                        \"args\": [\n                            \"-conf\",\n                            \"/etc/coredns/Corefile\"\n                        ],\n                        \"ports\": [\n                            {\n                                \"name\": \"dns\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"UDP\"\n                            },\n                            {\n                                \"name\": \"dns-tcp\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"TCP\"\n                            
},\n                            {\n                                \"name\": \"metrics\",\n                                \"containerPort\": 9153,\n                                \"protocol\": \"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"170Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"70Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/coredns\"\n                            },\n                            {\n                                \"name\": \"coredns-token-6skql\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/health\",\n                                \"port\": 8080,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 60,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 5\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/ready\",\n                                
\"port\": 8181,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"timeoutSeconds\": 1,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_BIND_SERVICE\"\n                                ],\n                                \"drop\": [\n                                    \"all\"\n                                ]\n                            },\n                            \"readOnlyRootFilesystem\": true,\n                            \"allowPrivilegeEscalation\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"beta.kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"coredns\",\n                \"serviceAccount\": \"coredns\",\n                \"nodeName\": \"kind-control-plane\",\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"effect\": 
\"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:28:00Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:28:10Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:28:10Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:28:00Z\"\n                    }\n                ],\n                
\"hostIP\": \"172.17.0.2\",\n                \"podIP\": \"10.244.0.4\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"10.244.0.4\"\n                    }\n                ],\n                \"startTime\": \"2020-01-14T21:28:00Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2020-01-14T21:28:02Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/coredns:1.6.5\",\n                        \"imageID\": \"sha256:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61\",\n                        \"containerID\": \"containerd://20a92704a40a23077419044536c77032e2cc5c48b282e644d5a6d0d2859dc941\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-6955765f44-blnrh\",\n                \"generateName\": \"coredns-6955765f44-\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/coredns-6955765f44-blnrh\",\n                \"uid\": \"12c36094-4dbc-4d46-9e3f-0978bc0f3639\",\n                \"resourceVersion\": \"593\",\n                \"creationTimestamp\": \"2020-01-14T21:27:43Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"6955765f44\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": 
\"ReplicaSet\",\n                        \"name\": \"coredns-6955765f44\",\n                        \"uid\": \"2f78164f-7a71-4604-9ac5-462c80cae439\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ],\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kube-controller-manager\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:27:43Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:metadata\": {\n                                \"f:generateName\": {},\n                                \"f:labels\": {\n                                    \"f:k8s-app\": {},\n                                    \"f:pod-template-hash\": {},\n                                    \".\": {}\n                                },\n                                \"f:ownerReferences\": {\n                                    \"k:{\\\"uid\\\":\\\"2f78164f-7a71-4604-9ac5-462c80cae439\\\"}\": {\n                                        \"f:apiVersion\": {},\n                                        \"f:blockOwnerDeletion\": {},\n                                        \"f:controller\": {},\n                                        \"f:kind\": {},\n                                        \"f:name\": {},\n                                        \"f:uid\": {},\n                                        \".\": {}\n                                    },\n                                    \".\": {}\n                                }\n                            },\n                            \"f:spec\": {\n                                \"f:containers\": {\n                                    \"k:{\\\"name\\\":\\\"coredns\\\"}\": {\n                                        \"f:args\": 
{},\n                                        \"f:image\": {},\n                                        \"f:imagePullPolicy\": {},\n                                        \"f:livenessProbe\": {\n                                            \"f:failureThreshold\": {},\n                                            \"f:httpGet\": {\n                                                \"f:path\": {},\n                                                \"f:port\": {},\n                                                \"f:scheme\": {},\n                                                \".\": {}\n                                            },\n                                            \"f:initialDelaySeconds\": {},\n                                            \"f:periodSeconds\": {},\n                                            \"f:successThreshold\": {},\n                                            \"f:timeoutSeconds\": {},\n                                            \".\": {}\n                                        },\n                                        \"f:name\": {},\n                                        \"f:ports\": {\n                                            \"k:{\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"}\": {\n                                                \"f:containerPort\": {},\n                                                \"f:name\": {},\n                                                \"f:protocol\": {},\n                                                \".\": {}\n                                            },\n                                            \"k:{\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"}\": {\n                                                \"f:containerPort\": {},\n                                                \"f:name\": {},\n                                                \"f:protocol\": {},\n                                                \".\": {}\n                                            },\n              
                              \"k:{\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}\": {\n                                                \"f:containerPort\": {},\n                                                \"f:name\": {},\n                                                \"f:protocol\": {},\n                                                \".\": {}\n                                            },\n                                            \".\": {}\n                                        },\n                                        \"f:readinessProbe\": {\n                                            \"f:failureThreshold\": {},\n                                            \"f:httpGet\": {\n                                                \"f:path\": {},\n                                                \"f:port\": {},\n                                                \"f:scheme\": {},\n                                                \".\": {}\n                                            },\n                                            \"f:periodSeconds\": {},\n                                            \"f:successThreshold\": {},\n                                            \"f:timeoutSeconds\": {},\n                                            \".\": {}\n                                        },\n                                        \"f:resources\": {\n                                            \"f:limits\": {\n                                                \"f:memory\": {},\n                                                \".\": {}\n                                            },\n                                            \"f:requests\": {\n                                                \"f:cpu\": {},\n                                                \"f:memory\": {},\n                                                \".\": {}\n                                            },\n                                            \".\": {}\n                    
                    },\n                                        \"f:securityContext\": {\n                                            \"f:allowPrivilegeEscalation\": {},\n                                            \"f:capabilities\": {\n                                                \"f:add\": {},\n                                                \"f:drop\": {},\n                                                \".\": {}\n                                            },\n                                            \"f:readOnlyRootFilesystem\": {},\n                                            \".\": {}\n                                        },\n                                        \"f:terminationMessagePath\": {},\n                                        \"f:terminationMessagePolicy\": {},\n                                        \"f:volumeMounts\": {\n                                            \"k:{\\\"mountPath\\\":\\\"/etc/coredns\\\"}\": {\n                                                \"f:mountPath\": {},\n                                                \"f:name\": {},\n                                                \"f:readOnly\": {},\n                                                \".\": {}\n                                            },\n                                            \"k:{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"}\": {\n                                                \"f:mountPath\": {},\n                                                \"f:name\": {},\n                                                \"f:readOnly\": {},\n                                                \".\": {}\n                                            },\n                                            \".\": {}\n                                        },\n                                        \".\": {}\n                                    }\n                                },\n                                \"f:dnsPolicy\": {},\n         
                       \"f:enableServiceLinks\": {},\n                                \"f:nodeSelector\": {\n                                    \"f:beta.kubernetes.io/os\": {},\n                                    \".\": {}\n                                },\n                                \"f:priority\": {},\n                                \"f:priorityClassName\": {},\n                                \"f:restartPolicy\": {},\n                                \"f:schedulerName\": {},\n                                \"f:securityContext\": {},\n                                \"f:serviceAccount\": {},\n                                \"f:serviceAccountName\": {},\n                                \"f:terminationGracePeriodSeconds\": {},\n                                \"f:tolerations\": {},\n                                \"f:volumes\": {\n                                    \"k:{\\\"name\\\":\\\"config-volume\\\"}\": {\n                                        \"f:configMap\": {\n                                            \"f:defaultMode\": {},\n                                            \"f:items\": {},\n                                            \"f:name\": {},\n                                            \".\": {}\n                                        },\n                                        \"f:name\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"name\\\":\\\"coredns-token-6skql\\\"}\": {\n                                        \"f:name\": {},\n                                        \"f:secret\": {\n                                            \"f:secretName\": {},\n                                            \".\": {}\n                                        },\n                                        \".\": {}\n                                    },\n                                    \".\": {}\n                                }\n                      
      }\n                        }\n                    },\n                    {\n                        \"manager\": \"kube-scheduler\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:27:43Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:status\": {\n                                \"f:conditions\": {\n                                    \"k:{\\\"type\\\":\\\"PodScheduled\\\"}\": {\n                                        \"f:lastProbeTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:message\": {},\n                                        \"f:reason\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \".\": {}\n                                }\n                            }\n                        }\n                    },\n                    {\n                        \"manager\": \"kubelet\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:28:11Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:status\": {\n                                \"f:conditions\": {\n                                    \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n                                        \"f:lastProbeTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {},\n                          
              \".\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n                                        \"f:lastProbeTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                        \"f:lastProbeTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    }\n                                },\n                                \"f:containerStatuses\": {},\n                                \"f:hostIP\": {},\n                                \"f:phase\": {},\n                                \"f:podIP\": {},\n                                \"f:podIPs\": {\n                                    \"k:{\\\"ip\\\":\\\"10.244.0.3\\\"}\": {\n                                        \"f:ip\": {},\n                                        \".\": {}\n                                    },\n                                    \".\": {}\n                                },\n                                \"f:startTime\": {}\n                            }\n                        }\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"config-volume\",\n                        \"configMap\": {\n                            \"name\": \"coredns\",\n                            \"items\": [\n                                {\n             
                       \"key\": \"Corefile\",\n                                    \"path\": \"Corefile\"\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"coredns-token-6skql\",\n                        \"secret\": {\n                            \"secretName\": \"coredns-token-6skql\",\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"image\": \"k8s.gcr.io/coredns:1.6.5\",\n                        \"args\": [\n                            \"-conf\",\n                            \"/etc/coredns/Corefile\"\n                        ],\n                        \"ports\": [\n                            {\n                                \"name\": \"dns\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"UDP\"\n                            },\n                            {\n                                \"name\": \"dns-tcp\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"TCP\"\n                            },\n                            {\n                                \"name\": \"metrics\",\n                                \"containerPort\": 9153,\n                                \"protocol\": \"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"170Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"70Mi\"\n     
                       }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/coredns\"\n                            },\n                            {\n                                \"name\": \"coredns-token-6skql\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/health\",\n                                \"port\": 8080,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 60,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 5\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/ready\",\n                                \"port\": 8181,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"timeoutSeconds\": 1,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        
\"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_BIND_SERVICE\"\n                                ],\n                                \"drop\": [\n                                    \"all\"\n                                ]\n                            },\n                            \"readOnlyRootFilesystem\": true,\n                            \"allowPrivilegeEscalation\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"beta.kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"coredns\",\n                \"serviceAccount\": \"coredns\",\n                \"nodeName\": \"kind-control-plane\",\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n         
       ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:28:00Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:28:11Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:28:11Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:28:00Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.2\",\n                \"podIP\": \"10.244.0.3\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"10.244.0.3\"\n                    }\n                ],\n                \"startTime\": \"2020-01-14T21:28:00Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2020-01-14T21:28:02Z\"\n                            }\n                       
 },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/coredns:1.6.5\",\n                        \"imageID\": \"sha256:70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61\",\n                        \"containerID\": \"containerd://f98e3a338f1cd1fe099cddb37a8a1f109326ac6726e23e4d4d1bd9c3786594ce\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-kind-control-plane\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/etcd-kind-control-plane\",\n                \"uid\": \"7c6cf293-b8ba-40b2-a8b4-523ae8498dee\",\n                \"resourceVersion\": \"243\",\n                \"creationTimestamp\": \"2020-01-14T21:27:29Z\",\n                \"labels\": {\n                    \"component\": \"etcd\",\n                    \"tier\": \"control-plane\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"27a865fad340b7665315c8f924f7c35c\",\n                    \"kubernetes.io/config.mirror\": \"27a865fad340b7665315c8f924f7c35c\",\n                    \"kubernetes.io/config.seen\": \"2020-01-14T21:27:28.46251902Z\",\n                    \"kubernetes.io/config.source\": \"file\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"kind-control-plane\",\n                        \"uid\": \"463295e6-ff31-4747-8659-7f8e155ce671\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                
\"volumes\": [\n                    {\n                        \"name\": \"etcd-certs\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki/etcd\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcd-data\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/etcd\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"etcd\",\n                        \"image\": \"k8s.gcr.io/etcd:3.4.3-0\",\n                        \"command\": [\n                            \"etcd\",\n                            \"--advertise-client-urls=https://172.17.0.2:2379\",\n                            \"--cert-file=/etc/kubernetes/pki/etcd/server.crt\",\n                            \"--client-cert-auth=true\",\n                            \"--data-dir=/var/lib/etcd\",\n                            \"--initial-advertise-peer-urls=https://172.17.0.2:2380\",\n                            \"--initial-cluster=kind-control-plane=https://172.17.0.2:2380\",\n                            \"--key-file=/etc/kubernetes/pki/etcd/server.key\",\n                            \"--listen-client-urls=https://127.0.0.1:2379,https://172.17.0.2:2379\",\n                            \"--listen-metrics-urls=http://127.0.0.1:2381\",\n                            \"--listen-peer-urls=https://172.17.0.2:2380\",\n                            \"--name=kind-control-plane\",\n                            \"--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt\",\n                            \"--peer-client-cert-auth=true\",\n                            \"--peer-key-file=/etc/kubernetes/pki/etcd/peer.key\",\n                            
\"--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt\",\n                            \"--snapshot-count=10000\",\n                            \"--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt\"\n                        ],\n                        \"resources\": {},\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"etcd-data\",\n                                \"mountPath\": \"/var/lib/etcd\"\n                            },\n                            {\n                                \"name\": \"etcd-certs\",\n                                \"mountPath\": \"/etc/kubernetes/pki/etcd\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/health\",\n                                \"port\": 2381,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 15,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 8\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"kind-control-plane\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                
\"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:27:28Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:27:28Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:27:28Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:27:28Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.2\",\n                \"podIP\": \"172.17.0.2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.2\"\n                    }\n                ],\n                \"startTime\": \"2020-01-14T21:27:28Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"etcd\",\n                        \"state\": 
{\n                            \"running\": {\n                                \"startedAt\": \"2020-01-14T21:27:20Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/etcd:3.4.3-0\",\n                        \"imageID\": \"sha256:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f\",\n                        \"containerID\": \"containerd://a2c9962f494b46eb4b244fa334126dbc812eff10fa1178f961578eea327762cd\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"BestEffort\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-2hf8t\",\n                \"generateName\": \"kindnet-\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kindnet-2hf8t\",\n                \"uid\": \"837d47ca-53f0-4316-9c5b-ecc566cad451\",\n                \"resourceVersion\": \"433\",\n                \"creationTimestamp\": \"2020-01-14T21:27:43Z\",\n                \"labels\": {\n                    \"app\": \"kindnet\",\n                    \"controller-revision-hash\": \"5b955bbc76\",\n                    \"k8s-app\": \"kindnet\",\n                    \"pod-template-generation\": \"1\",\n                    \"tier\": \"node\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kindnet\",\n                        \"uid\": \"e718f5b5-6f0d-49a2-a4e2-6a6f0b1641a9\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            
\"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"cni-cfg\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kindnet-token-k2xwr\",\n                        \"secret\": {\n                            \"secretName\": \"kindnet-token-k2xwr\",\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kindnet-cni\",\n                        \"image\": \"kindest/kindnetd:0.5.4\",\n                        \"env\": [\n                            {\n                                \"name\": \"HOST_IP\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"status.hostIP\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_IP\",\n                                \"valueFrom\": {\n                                    
\"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"status.podIP\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_SUBNET\",\n                                \"value\": \"10.244.0.0/16\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"50Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"50Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"cni-cfg\",\n                                \"mountPath\": \"/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"kindnet-token-k2xwr\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        
\"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_RAW\",\n                                    \"NET_ADMIN\"\n                                ]\n                            },\n                            \"privileged\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"kindnet\",\n                \"serviceAccount\": \"kindnet\",\n                \"nodeName\": \"kind-control-plane\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"kind-control-plane\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        
\"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priority\": 0,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n     
                   \"lastTransitionTime\": \"2020-01-14T21:27:43Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:27:47Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:27:47Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:27:43Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.2\",\n                \"podIP\": \"172.17.0.2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.2\"\n                    }\n                ],\n                \"startTime\": \"2020-01-14T21:27:43Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kindnet-cni\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2020-01-14T21:27:46Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"docker.io/kindest/kindnetd:0.5.4\",\n                        \"imageID\": \"sha256:2186a1a396deb58f1ea5eaf20193a518ca05049b46ccd754ec83366b5c8c13d5\",\n                        \"containerID\": \"containerd://5be300544f5a8834f5d391a7d2933007dce36e9c8497a204edc478615f48d6b9\",\n          
              \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Guaranteed\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-6rhkp\",\n                \"generateName\": \"kindnet-\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kindnet-6rhkp\",\n                \"uid\": \"686881ea-3f52-4876-b501-f2be4e700242\",\n                \"resourceVersion\": \"573\",\n                \"creationTimestamp\": \"2020-01-14T21:28:02Z\",\n                \"labels\": {\n                    \"app\": \"kindnet\",\n                    \"controller-revision-hash\": \"5b955bbc76\",\n                    \"k8s-app\": \"kindnet\",\n                    \"pod-template-generation\": \"1\",\n                    \"tier\": \"node\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kindnet\",\n                        \"uid\": \"e718f5b5-6f0d-49a2-a4e2-6a6f0b1641a9\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ],\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kube-controller-manager\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:28:02Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:metadata\": {\n                                \"f:generateName\": {},\n                                \"f:labels\": {\n                                    \"f:app\": {},\n                                    \"f:controller-revision-hash\": 
{},\n                                    \"f:k8s-app\": {},\n                                    \"f:pod-template-generation\": {},\n                                    \"f:tier\": {},\n                                    \".\": {}\n                                },\n                                \"f:ownerReferences\": {\n                                    \"k:{\\\"uid\\\":\\\"e718f5b5-6f0d-49a2-a4e2-6a6f0b1641a9\\\"}\": {\n                                        \"f:apiVersion\": {},\n                                        \"f:blockOwnerDeletion\": {},\n                                        \"f:controller\": {},\n                                        \"f:kind\": {},\n                                        \"f:name\": {},\n                                        \"f:uid\": {},\n                                        \".\": {}\n                                    },\n                                    \".\": {}\n                                }\n                            },\n                            \"f:spec\": {\n                                \"f:affinity\": {\n                                    \"f:nodeAffinity\": {\n                                        \"f:requiredDuringSchedulingIgnoredDuringExecution\": {\n                                            \"f:nodeSelectorTerms\": {},\n                                            \".\": {}\n                                        },\n                                        \".\": {}\n                                    },\n                                    \".\": {}\n                                },\n                                \"f:containers\": {\n                                    \"k:{\\\"name\\\":\\\"kindnet-cni\\\"}\": {\n                                        \"f:env\": {\n                                            \"k:{\\\"name\\\":\\\"HOST_IP\\\"}\": {\n                                                \"f:name\": {},\n                                                
\"f:valueFrom\": {\n                                                    \"f:fieldRef\": {\n                                                        \"f:apiVersion\": {},\n                                                        \"f:fieldPath\": {},\n                                                        \".\": {}\n                                                    },\n                                                    \".\": {}\n                                                },\n                                                \".\": {}\n                                            },\n                                            \"k:{\\\"name\\\":\\\"POD_IP\\\"}\": {\n                                                \"f:name\": {},\n                                                \"f:valueFrom\": {\n                                                    \"f:fieldRef\": {\n                                                        \"f:apiVersion\": {},\n                                                        \"f:fieldPath\": {},\n                                                        \".\": {}\n                                                    },\n                                                    \".\": {}\n                                                },\n                                                \".\": {}\n                                            },\n                                            \"k:{\\\"name\\\":\\\"POD_SUBNET\\\"}\": {\n                                                \"f:name\": {},\n                                                \"f:value\": {},\n                                                \".\": {}\n                                            },\n                                            \".\": {}\n                                        },\n                                        \"f:image\": {},\n                                        \"f:imagePullPolicy\": {},\n                                        \"f:name\": {},\n         
                               \"f:resources\": {\n                                            \"f:limits\": {\n                                                \"f:cpu\": {},\n                                                \"f:memory\": {},\n                                                \".\": {}\n                                            },\n                                            \"f:requests\": {\n                                                \"f:cpu\": {},\n                                                \"f:memory\": {},\n                                                \".\": {}\n                                            },\n                                            \".\": {}\n                                        },\n                                        \"f:securityContext\": {\n                                            \"f:capabilities\": {\n                                                \"f:add\": {},\n                                                \".\": {}\n                                            },\n                                            \"f:privileged\": {},\n                                            \".\": {}\n                                        },\n                                        \"f:terminationMessagePath\": {},\n                                        \"f:terminationMessagePolicy\": {},\n                                        \"f:volumeMounts\": {\n                                            \"k:{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\"}\": {\n                                                \"f:mountPath\": {},\n                                                \"f:name\": {},\n                                                \".\": {}\n                                            },\n                                            \"k:{\\\"mountPath\\\":\\\"/lib/modules\\\"}\": {\n                                                \"f:mountPath\": {},\n                                                \"f:name\": 
{},\n                                                \"f:readOnly\": {},\n                                                \".\": {}\n                                            },\n                                            \"k:{\\\"mountPath\\\":\\\"/run/xtables.lock\\\"}\": {\n                                                \"f:mountPath\": {},\n                                                \"f:name\": {},\n                                                \".\": {}\n                                            },\n                                            \"k:{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"}\": {\n                                                \"f:mountPath\": {},\n                                                \"f:name\": {},\n                                                \"f:readOnly\": {},\n                                                \".\": {}\n                                            },\n                                            \".\": {}\n                                        },\n                                        \".\": {}\n                                    }\n                                },\n                                \"f:dnsPolicy\": {},\n                                \"f:enableServiceLinks\": {},\n                                \"f:hostNetwork\": {},\n                                \"f:priority\": {},\n                                \"f:restartPolicy\": {},\n                                \"f:schedulerName\": {},\n                                \"f:securityContext\": {},\n                                \"f:serviceAccount\": {},\n                                \"f:serviceAccountName\": {},\n                                \"f:terminationGracePeriodSeconds\": {},\n                                \"f:tolerations\": {},\n                                \"f:volumes\": {\n                                    \"k:{\\\"name\\\":\\\"cni-cfg\\\"}\": {\n                                
        \"f:hostPath\": {\n                                            \"f:path\": {},\n                                            \"f:type\": {},\n                                            \".\": {}\n                                        },\n                                        \"f:name\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"name\\\":\\\"kindnet-token-k2xwr\\\"}\": {\n                                        \"f:name\": {},\n                                        \"f:secret\": {\n                                            \"f:secretName\": {},\n                                            \".\": {}\n                                        },\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"name\\\":\\\"lib-modules\\\"}\": {\n                                        \"f:hostPath\": {\n                                            \"f:path\": {},\n                                            \"f:type\": {},\n                                            \".\": {}\n                                        },\n                                        \"f:name\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"name\\\":\\\"xtables-lock\\\"}\": {\n                                        \"f:hostPath\": {\n                                            \"f:path\": {},\n                                            \"f:type\": {},\n                                            \".\": {}\n                                        },\n                                        \"f:name\": {},\n                                        \".\": {}\n                                    },\n                                    \".\": {}\n                                }\n                            }\n            
            }\n                    },\n                    {\n                        \"manager\": \"kubelet\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:28:06Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:status\": {\n                                \"f:conditions\": {\n                                    \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n                                        \"f:lastProbeTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n                                        \"f:lastProbeTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                        \"f:lastProbeTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    }\n                                },\n                                \"f:containerStatuses\": {},\n                                \"f:hostIP\": {},\n                                \"f:phase\": {},\n                                \"f:podIP\": {},\n          
                      \"f:podIPs\": {\n                                    \"k:{\\\"ip\\\":\\\"172.17.0.4\\\"}\": {\n                                        \"f:ip\": {},\n                                        \".\": {}\n                                    },\n                                    \".\": {}\n                                },\n                                \"f:startTime\": {}\n                            }\n                        }\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"cni-cfg\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kindnet-token-k2xwr\",\n                        \"secret\": {\n                            \"secretName\": \"kindnet-token-k2xwr\",\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kindnet-cni\",\n                        \"image\": \"kindest/kindnetd:0.5.4\",\n                        \"env\": [\n                            {\n                                \"name\": 
\"HOST_IP\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"status.hostIP\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_IP\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"status.podIP\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_SUBNET\",\n                                \"value\": \"10.244.0.0/16\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"50Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"50Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"cni-cfg\",\n                                \"mountPath\": \"/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                
\"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"kindnet-token-k2xwr\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_RAW\",\n                                    \"NET_ADMIN\"\n                                ]\n                            },\n                            \"privileged\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"kindnet\",\n                \"serviceAccount\": \"kindnet\",\n                \"nodeName\": \"kind-worker\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n       
                                         \"kind-worker\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n      
                  \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priority\": 0,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:28:02Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:28:06Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:28:06Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:28:02Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.4\",\n                \"podIP\": \"172.17.0.4\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.4\"\n                    }\n                ],\n                \"startTime\": \"2020-01-14T21:28:02Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kindnet-cni\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": 
\"2020-01-14T21:28:06Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"docker.io/kindest/kindnetd:0.5.4\",\n                        \"imageID\": \"sha256:2186a1a396deb58f1ea5eaf20193a518ca05049b46ccd754ec83366b5c8c13d5\",\n                        \"containerID\": \"containerd://ae1860ab40112b68de586e4e10016efc98395b3fb55b4f268563b22e481755ea\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Guaranteed\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-jxzbl\",\n                \"generateName\": \"kindnet-\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kindnet-jxzbl\",\n                \"uid\": \"ab6159d6-619a-41db-a816-0600a87923a8\",\n                \"resourceVersion\": \"567\",\n                \"creationTimestamp\": \"2020-01-14T21:28:01Z\",\n                \"labels\": {\n                    \"app\": \"kindnet\",\n                    \"controller-revision-hash\": \"5b955bbc76\",\n                    \"k8s-app\": \"kindnet\",\n                    \"pod-template-generation\": \"1\",\n                    \"tier\": \"node\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kindnet\",\n                        \"uid\": \"e718f5b5-6f0d-49a2-a4e2-6a6f0b1641a9\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        
\"name\": \"cni-cfg\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kindnet-token-k2xwr\",\n                        \"secret\": {\n                            \"secretName\": \"kindnet-token-k2xwr\",\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kindnet-cni\",\n                        \"image\": \"kindest/kindnetd:0.5.4\",\n                        \"env\": [\n                            {\n                                \"name\": \"HOST_IP\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"status.hostIP\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_IP\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                
                        \"fieldPath\": \"status.podIP\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_SUBNET\",\n                                \"value\": \"10.244.0.0/16\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"50Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"50Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"cni-cfg\",\n                                \"mountPath\": \"/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"kindnet-token-k2xwr\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n          
              \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_RAW\",\n                                    \"NET_ADMIN\"\n                                ]\n                            },\n                            \"privileged\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"kindnet\",\n                \"serviceAccount\": \"kindnet\",\n                \"nodeName\": \"kind-worker2\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"kind-worker2\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": 
\"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priority\": 0,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:28:01Z\"\n                    },\n               
     {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:28:06Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:28:06Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:28:01Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.3\",\n                \"podIP\": \"172.17.0.3\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.3\"\n                    }\n                ],\n                \"startTime\": \"2020-01-14T21:28:01Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kindnet-cni\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2020-01-14T21:28:06Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"docker.io/kindest/kindnetd:0.5.4\",\n                        \"imageID\": \"sha256:2186a1a396deb58f1ea5eaf20193a518ca05049b46ccd754ec83366b5c8c13d5\",\n                        \"containerID\": \"containerd://472c08a6fb04d1712c9005017e4fe4ceff3ca2ca4eeddd696f48d5a97475858a\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": 
\"Guaranteed\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-apiserver-kind-control-plane\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-apiserver-kind-control-plane\",\n                \"uid\": \"dabd96ff-8317-49b8-89f1-6c1f1c743a5d\",\n                \"resourceVersion\": \"247\",\n                \"creationTimestamp\": \"2020-01-14T21:27:29Z\",\n                \"labels\": {\n                    \"component\": \"kube-apiserver\",\n                    \"tier\": \"control-plane\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"5fdc52a80367275a6883bb5c8317813c\",\n                    \"kubernetes.io/config.mirror\": \"5fdc52a80367275a6883bb5c8317813c\",\n                    \"kubernetes.io/config.seen\": \"2020-01-14T21:27:28.462521311Z\",\n                    \"kubernetes.io/config.source\": \"file\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"kind-control-plane\",\n                        \"uid\": \"463295e6-ff31-4747-8659-7f8e155ce671\",\n                        \"controller\": true\n                    }\n                ],\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kubelet\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:27:30Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:metadata\": {\n                                \"f:annotations\": {\n                                    \"f:kubernetes.io/config.hash\": {},\n                                    
\"f:kubernetes.io/config.mirror\": {},\n                                    \"f:kubernetes.io/config.seen\": {},\n                                    \"f:kubernetes.io/config.source\": {},\n                                    \".\": {}\n                                },\n                                \"f:labels\": {\n                                    \"f:component\": {},\n                                    \"f:tier\": {},\n                                    \".\": {}\n                                },\n                                \"f:ownerReferences\": {\n                                    \"k:{\\\"uid\\\":\\\"463295e6-ff31-4747-8659-7f8e155ce671\\\"}\": {\n                                        \"f:apiVersion\": {},\n                                        \"f:controller\": {},\n                                        \"f:kind\": {},\n                                        \"f:name\": {},\n                                        \"f:uid\": {},\n                                        \".\": {}\n                                    },\n                                    \".\": {}\n                                }\n                            },\n                            \"f:spec\": {\n                                \"f:containers\": {\n                                    \"k:{\\\"name\\\":\\\"kube-apiserver\\\"}\": {\n                                        \"f:command\": {},\n                                        \"f:image\": {},\n                                        \"f:imagePullPolicy\": {},\n                                        \"f:livenessProbe\": {\n                                            \"f:failureThreshold\": {},\n                                            \"f:httpGet\": {\n                                                \"f:host\": {},\n                                                \"f:path\": {},\n                                                \"f:port\": {},\n                                                
\"f:scheme\": {},\n                                                \".\": {}\n                                            },\n                                            \"f:initialDelaySeconds\": {},\n                                            \"f:periodSeconds\": {},\n                                            \"f:successThreshold\": {},\n                                            \"f:timeoutSeconds\": {},\n                                            \".\": {}\n                                        },\n                                        \"f:name\": {},\n                                        \"f:resources\": {\n                                            \"f:requests\": {\n                                                \"f:cpu\": {},\n                                                \".\": {}\n                                            },\n                                            \".\": {}\n                                        },\n                                        \"f:terminationMessagePath\": {},\n                                        \"f:terminationMessagePolicy\": {},\n                                        \"f:volumeMounts\": {\n                                            \"k:{\\\"mountPath\\\":\\\"/etc/ca-certificates\\\"}\": {\n                                                \"f:mountPath\": {},\n                                                \"f:name\": {},\n                                                \"f:readOnly\": {},\n                                                \".\": {}\n                                            },\n                                            \"k:{\\\"mountPath\\\":\\\"/etc/kubernetes/pki\\\"}\": {\n                                                \"f:mountPath\": {},\n                                                \"f:name\": {},\n                                                \"f:readOnly\": {},\n                                                \".\": {}\n                                         
   },\n                                            \"k:{\\\"mountPath\\\":\\\"/etc/ssl/certs\\\"}\": {\n                                                \"f:mountPath\": {},\n                                                \"f:name\": {},\n                                                \"f:readOnly\": {},\n                                                \".\": {}\n                                            },\n                                            \"k:{\\\"mountPath\\\":\\\"/usr/local/share/ca-certificates\\\"}\": {\n                                                \"f:mountPath\": {},\n                                                \"f:name\": {},\n                                                \"f:readOnly\": {},\n                                                \".\": {}\n                                            },\n                                            \"k:{\\\"mountPath\\\":\\\"/usr/share/ca-certificates\\\"}\": {\n                                                \"f:mountPath\": {},\n                                                \"f:name\": {},\n                                                \"f:readOnly\": {},\n                                                \".\": {}\n                                            },\n                                            \".\": {}\n                                        },\n                                        \".\": {}\n                                    }\n                                },\n                                \"f:dnsPolicy\": {},\n                                \"f:enableServiceLinks\": {},\n                                \"f:hostNetwork\": {},\n                                \"f:nodeName\": {},\n                                \"f:priority\": {},\n                                \"f:priorityClassName\": {},\n                                \"f:restartPolicy\": {},\n                                \"f:schedulerName\": {},\n                                \"f:securityContext\": 
{},\n                                \"f:terminationGracePeriodSeconds\": {},\n                                \"f:tolerations\": {},\n                                \"f:volumes\": {\n                                    \"k:{\\\"name\\\":\\\"ca-certs\\\"}\": {\n                                        \"f:hostPath\": {\n                                            \"f:path\": {},\n                                            \"f:type\": {},\n                                            \".\": {}\n                                        },\n                                        \"f:name\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"name\\\":\\\"etc-ca-certificates\\\"}\": {\n                                        \"f:hostPath\": {\n                                            \"f:path\": {},\n                                            \"f:type\": {},\n                                            \".\": {}\n                                        },\n                                        \"f:name\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"name\\\":\\\"k8s-certs\\\"}\": {\n                                        \"f:hostPath\": {\n                                            \"f:path\": {},\n                                            \"f:type\": {},\n                                            \".\": {}\n                                        },\n                                        \"f:name\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"name\\\":\\\"usr-local-share-ca-certificates\\\"}\": {\n                                        \"f:hostPath\": {\n                                            \"f:path\": {},\n                                            \"f:type\": {},\n   
                                         \".\": {}\n                                        },\n                                        \"f:name\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"name\\\":\\\"usr-share-ca-certificates\\\"}\": {\n                                        \"f:hostPath\": {\n                                            \"f:path\": {},\n                                            \"f:type\": {},\n                                            \".\": {}\n                                        },\n                                        \"f:name\": {},\n                                        \".\": {}\n                                    },\n                                    \".\": {}\n                                }\n                            },\n                            \"f:status\": {\n                                \"f:conditions\": {\n                                    \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n                                        \"f:lastProbeTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n                                        \"f:lastProbeTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"PodScheduled\\\"}\": {\n                                        \"f:lastProbeTime\": {},\n                   
                     \"f:lastTransitionTime\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                        \"f:lastProbeTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \".\": {}\n                                },\n                                \"f:containerStatuses\": {},\n                                \"f:hostIP\": {},\n                                \"f:phase\": {},\n                                \"f:podIP\": {},\n                                \"f:podIPs\": {\n                                    \"k:{\\\"ip\\\":\\\"172.17.0.2\\\"}\": {\n                                        \"f:ip\": {},\n                                        \".\": {}\n                                    },\n                                    \".\": {}\n                                },\n                                \"f:startTime\": {}\n                            }\n                        }\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"ca-certs\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/ssl/certs\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"etc-ca-certificates\",\n                        \"hostPath\": {\n                            
\"path\": \"/etc/ca-certificates\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"k8s-certs\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"usr-local-share-ca-certificates\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/local/share/ca-certificates\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"usr-share-ca-certificates\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-apiserver\",\n                        \"image\": \"k8s.gcr.io/kube-apiserver:v1.18.0-alpha.1.681_c12a96f7f64648\",\n                        \"command\": [\n                            \"kube-apiserver\",\n                            \"--advertise-address=172.17.0.2\",\n                            \"--allow-privileged=true\",\n                            \"--authorization-mode=Node,RBAC\",\n                            \"--client-ca-file=/etc/kubernetes/pki/ca.crt\",\n                            \"--enable-admission-plugins=NodeRestriction\",\n                            \"--enable-bootstrap-token-auth=true\",\n                            \"--etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt\",\n                            \"--etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt\",\n            
                \"--etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key\",\n                            \"--etcd-servers=https://127.0.0.1:2379\",\n                            \"--insecure-port=0\",\n                            \"--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt\",\n                            \"--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key\",\n                            \"--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname\",\n                            \"--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt\",\n                            \"--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key\",\n                            \"--requestheader-allowed-names=front-proxy-client\",\n                            \"--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt\",\n                            \"--requestheader-extra-headers-prefix=X-Remote-Extra-\",\n                            \"--requestheader-group-headers=X-Remote-Group\",\n                            \"--requestheader-username-headers=X-Remote-User\",\n                            \"--secure-port=6443\",\n                            \"--service-account-key-file=/etc/kubernetes/pki/sa.pub\",\n                            \"--service-cluster-ip-range=10.96.0.0/12\",\n                            \"--tls-cert-file=/etc/kubernetes/pki/apiserver.crt\",\n                            \"--tls-private-key-file=/etc/kubernetes/pki/apiserver.key\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"250m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"ca-certs\",\n                                \"readOnly\": true,\n                                
\"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"etc-ca-certificates\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ca-certificates\"\n                            },\n                            {\n                                \"name\": \"k8s-certs\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/pki\"\n                            },\n                            {\n                                \"name\": \"usr-local-share-ca-certificates\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/local/share/ca-certificates\"\n                            },\n                            {\n                                \"name\": \"usr-share-ca-certificates\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/share/ca-certificates\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 6443,\n                                \"host\": \"172.17.0.2\",\n                                \"scheme\": \"HTTPS\"\n                            },\n                            \"initialDelaySeconds\": 15,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 8\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n      
              }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"kind-control-plane\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:27:28Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:27:28Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:27:28Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:27:28Z\"\n                    }\n                ],\n                \"hostIP\": 
\"172.17.0.2\",\n                \"podIP\": \"172.17.0.2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.2\"\n                    }\n                ],\n                \"startTime\": \"2020-01-14T21:27:28Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-apiserver\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2020-01-14T21:27:20Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-apiserver:v1.18.0-alpha.1.681_c12a96f7f64648\",\n                        \"imageID\": \"sha256:1cf18226cf249ac79dc7ad287210db0e837b0dcfef705bf3dd4ab4c7b2460f77\",\n                        \"containerID\": \"containerd://813cdd9adbe88a2834395e4219745d20e21085419495ff2c8da7b5de75ca2ad5\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-kind-control-plane\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-controller-manager-kind-control-plane\",\n                \"uid\": \"197b6340-b649-4a1c-9c16-d041f3c39eba\",\n                \"resourceVersion\": \"263\",\n                \"creationTimestamp\": \"2020-01-14T21:27:28Z\",\n                \"labels\": {\n                    \"component\": \"kube-controller-manager\",\n                    \"tier\": \"control-plane\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"d0072be0932507fc0fec6e0bbb9fec2c\",\n                   
 \"kubernetes.io/config.mirror\": \"d0072be0932507fc0fec6e0bbb9fec2c\",\n                    \"kubernetes.io/config.seen\": \"2020-01-14T21:27:28.46251015Z\",\n                    \"kubernetes.io/config.source\": \"file\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"kind-control-plane\",\n                        \"uid\": \"463295e6-ff31-4747-8659-7f8e155ce671\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"ca-certs\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/ssl/certs\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"etc-ca-certificates\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/ca-certificates\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"flexvolume-dir\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"k8s-certs\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        
\"hostPath\": {\n                            \"path\": \"/etc/kubernetes/controller-manager.conf\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"usr-local-share-ca-certificates\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/local/share/ca-certificates\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"usr-share-ca-certificates\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-controller-manager\",\n                        \"image\": \"k8s.gcr.io/kube-controller-manager:v1.18.0-alpha.1.681_c12a96f7f64648\",\n                        \"command\": [\n                            \"kube-controller-manager\",\n                            \"--allocate-node-cidrs=true\",\n                            \"--authentication-kubeconfig=/etc/kubernetes/controller-manager.conf\",\n                            \"--authorization-kubeconfig=/etc/kubernetes/controller-manager.conf\",\n                            \"--bind-address=127.0.0.1\",\n                            \"--client-ca-file=/etc/kubernetes/pki/ca.crt\",\n                            \"--cluster-cidr=10.244.0.0/16\",\n                            \"--cluster-name=kind\",\n                            \"--cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt\",\n                            \"--cluster-signing-key-file=/etc/kubernetes/pki/ca.key\",\n                            \"--controllers=*,bootstrapsigner,tokencleaner\",\n                            
\"--enable-hostpath-provisioner=true\",\n                            \"--kubeconfig=/etc/kubernetes/controller-manager.conf\",\n                            \"--leader-elect=true\",\n                            \"--node-cidr-mask-size=24\",\n                            \"--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt\",\n                            \"--root-ca-file=/etc/kubernetes/pki/ca.crt\",\n                            \"--service-account-private-key-file=/etc/kubernetes/pki/sa.key\",\n                            \"--service-cluster-ip-range=10.96.0.0/12\",\n                            \"--use-service-account-credentials=true\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"200m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"ca-certs\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"etc-ca-certificates\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ca-certificates\"\n                            },\n                            {\n                                \"name\": \"flexvolume-dir\",\n                                \"mountPath\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\"\n                            },\n                            {\n                                \"name\": \"k8s-certs\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/pki\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n    
                            \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/controller-manager.conf\"\n                            },\n                            {\n                                \"name\": \"usr-local-share-ca-certificates\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/local/share/ca-certificates\"\n                            },\n                            {\n                                \"name\": \"usr-share-ca-certificates\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/share/ca-certificates\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 10257,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTPS\"\n                            },\n                            \"initialDelaySeconds\": 15,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 8\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"kind-control-plane\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n          
      \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:27:28Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:27:28Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:27:28Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:27:28Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.2\",\n                \"podIP\": \"172.17.0.2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.2\"\n                    }\n                ],\n                \"startTime\": \"2020-01-14T21:27:28Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-controller-manager\",\n          
              \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2020-01-14T21:27:20Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-controller-manager:v1.18.0-alpha.1.681_c12a96f7f64648\",\n                        \"imageID\": \"sha256:50d76b99bc856456305b63f2be7dd10d21dadbf109a0698cd6400a40f14c9ac8\",\n                        \"containerID\": \"containerd://0604f13de79bb3abe031d0d46860a619b56a38f7c1a664c9f29aa2894511b662\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-4md69\",\n                \"generateName\": \"kube-proxy-\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-proxy-4md69\",\n                \"uid\": \"37fe8706-2809-4c8e-a140-851c9064adb1\",\n                \"resourceVersion\": \"571\",\n                \"creationTimestamp\": \"2020-01-14T21:28:02Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"77b478d68\",\n                    \"k8s-app\": \"kube-proxy\",\n                    \"pod-template-generation\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kube-proxy\",\n                        \"uid\": \"be6ce503-aa7e-43b3-9c82-1e67fd75e9df\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ],\n                
\"managedFields\": [\n                    {\n                        \"manager\": \"kube-controller-manager\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:28:02Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:metadata\": {\n                                \"f:generateName\": {},\n                                \"f:labels\": {\n                                    \"f:controller-revision-hash\": {},\n                                    \"f:k8s-app\": {},\n                                    \"f:pod-template-generation\": {},\n                                    \".\": {}\n                                },\n                                \"f:ownerReferences\": {\n                                    \"k:{\\\"uid\\\":\\\"be6ce503-aa7e-43b3-9c82-1e67fd75e9df\\\"}\": {\n                                        \"f:apiVersion\": {},\n                                        \"f:blockOwnerDeletion\": {},\n                                        \"f:controller\": {},\n                                        \"f:kind\": {},\n                                        \"f:name\": {},\n                                        \"f:uid\": {},\n                                        \".\": {}\n                                    },\n                                    \".\": {}\n                                }\n                            },\n                            \"f:spec\": {\n                                \"f:affinity\": {\n                                    \"f:nodeAffinity\": {\n                                        \"f:requiredDuringSchedulingIgnoredDuringExecution\": {\n                                            \"f:nodeSelectorTerms\": {},\n                                            \".\": {}\n                                        },\n                                   
     \".\": {}\n                                    },\n                                    \".\": {}\n                                },\n                                \"f:containers\": {\n                                    \"k:{\\\"name\\\":\\\"kube-proxy\\\"}\": {\n                                        \"f:command\": {},\n                                        \"f:env\": {\n                                            \"k:{\\\"name\\\":\\\"NODE_NAME\\\"}\": {\n                                                \"f:name\": {},\n                                                \"f:valueFrom\": {\n                                                    \"f:fieldRef\": {\n                                                        \"f:apiVersion\": {},\n                                                        \"f:fieldPath\": {},\n                                                        \".\": {}\n                                                    },\n                                                    \".\": {}\n                                                },\n                                                \".\": {}\n                                            },\n                                            \".\": {}\n                                        },\n                                        \"f:image\": {},\n                                        \"f:imagePullPolicy\": {},\n                                        \"f:name\": {},\n                                        \"f:resources\": {},\n                                        \"f:securityContext\": {\n                                            \"f:privileged\": {},\n                                            \".\": {}\n                                        },\n                                        \"f:terminationMessagePath\": {},\n                                        \"f:terminationMessagePolicy\": {},\n                                        \"f:volumeMounts\": {\n                              
              \"k:{\\\"mountPath\\\":\\\"/lib/modules\\\"}\": {\n                                                \"f:mountPath\": {},\n                                                \"f:name\": {},\n                                                \"f:readOnly\": {},\n                                                \".\": {}\n                                            },\n                                            \"k:{\\\"mountPath\\\":\\\"/run/xtables.lock\\\"}\": {\n                                                \"f:mountPath\": {},\n                                                \"f:name\": {},\n                                                \".\": {}\n                                            },\n                                            \"k:{\\\"mountPath\\\":\\\"/var/lib/kube-proxy\\\"}\": {\n                                                \"f:mountPath\": {},\n                                                \"f:name\": {},\n                                                \".\": {}\n                                            },\n                                            \"k:{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"}\": {\n                                                \"f:mountPath\": {},\n                                                \"f:name\": {},\n                                                \"f:readOnly\": {},\n                                                \".\": {}\n                                            },\n                                            \".\": {}\n                                        },\n                                        \".\": {}\n                                    }\n                                },\n                                \"f:dnsPolicy\": {},\n                                \"f:enableServiceLinks\": {},\n                                \"f:hostNetwork\": {},\n                                \"f:nodeSelector\": {\n                                    
\"f:beta.kubernetes.io/os\": {},\n                                    \".\": {}\n                                },\n                                \"f:priority\": {},\n                                \"f:priorityClassName\": {},\n                                \"f:restartPolicy\": {},\n                                \"f:schedulerName\": {},\n                                \"f:securityContext\": {},\n                                \"f:serviceAccount\": {},\n                                \"f:serviceAccountName\": {},\n                                \"f:terminationGracePeriodSeconds\": {},\n                                \"f:tolerations\": {},\n                                \"f:volumes\": {\n                                    \"k:{\\\"name\\\":\\\"kube-proxy\\\"}\": {\n                                        \"f:configMap\": {\n                                            \"f:defaultMode\": {},\n                                            \"f:name\": {},\n                                            \".\": {}\n                                        },\n                                        \"f:name\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"name\\\":\\\"kube-proxy-token-g5vsq\\\"}\": {\n                                        \"f:name\": {},\n                                        \"f:secret\": {\n                                            \"f:secretName\": {},\n                                            \".\": {}\n                                        },\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"name\\\":\\\"lib-modules\\\"}\": {\n                                        \"f:hostPath\": {\n                                            \"f:path\": {},\n                                            \"f:type\": {},\n                                            
\".\": {}\n                                        },\n                                        \"f:name\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"name\\\":\\\"xtables-lock\\\"}\": {\n                                        \"f:hostPath\": {\n                                            \"f:path\": {},\n                                            \"f:type\": {},\n                                            \".\": {}\n                                        },\n                                        \"f:name\": {},\n                                        \".\": {}\n                                    },\n                                    \".\": {}\n                                }\n                            }\n                        }\n                    },\n                    {\n                        \"manager\": \"kubelet\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:28:06Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:status\": {\n                                \"f:conditions\": {\n                                    \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n                                        \"f:lastProbeTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n                                        \"f:lastProbeTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        
\"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                        \"f:lastProbeTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    }\n                                },\n                                \"f:containerStatuses\": {},\n                                \"f:hostIP\": {},\n                                \"f:phase\": {},\n                                \"f:podIP\": {},\n                                \"f:podIPs\": {\n                                    \"k:{\\\"ip\\\":\\\"172.17.0.4\\\"}\": {\n                                        \"f:ip\": {},\n                                        \".\": {}\n                                    },\n                                    \".\": {}\n                                },\n                                \"f:startTime\": {}\n                            }\n                        }\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"configMap\": {\n                            \"name\": \"kube-proxy\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n   
                     \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kube-proxy-token-g5vsq\",\n                        \"secret\": {\n                            \"secretName\": \"kube-proxy-token-g5vsq\",\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.1.681_c12a96f7f64648\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\",\n                            \"--config=/var/lib/kube-proxy/config.conf\",\n                            \"--hostname-override=$(NODE_NAME)\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"NODE_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"spec.nodeName\"\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {},\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kube-proxy\",\n                                \"mountPath\": \"/var/lib/kube-proxy\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                   
         },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"kube-proxy-token-g5vsq\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeSelector\": {\n                    \"beta.kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"kube-proxy\",\n                \"serviceAccount\": \"kube-proxy\",\n                \"nodeName\": \"kind-worker\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n         
                                       \"kind-worker\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n       
             {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:28:02Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:28:06Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:28:06Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:28:02Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.4\",\n                \"podIP\": \"172.17.0.4\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.4\"\n                    }\n                ],\n                \"startTime\": \"2020-01-14T21:28:02Z\",\n                \"containerStatuses\": [\n                    {\n                        
\"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2020-01-14T21:28:05Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.1.681_c12a96f7f64648\",\n                        \"imageID\": \"sha256:f3319968dae006eadfe699d5120c818675e12f7e179d3b6b27f501eac5fbc314\",\n                        \"containerID\": \"containerd://d05265d3042b3cd0264d33c887aa21faeef02da628a767090afda25e30691b1e\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"BestEffort\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-rh967\",\n                \"generateName\": \"kube-proxy-\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-proxy-rh967\",\n                \"uid\": \"ac5c939c-0083-48fe-bdc2-c223e5b29142\",\n                \"resourceVersion\": \"417\",\n                \"creationTimestamp\": \"2020-01-14T21:27:43Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"77b478d68\",\n                    \"k8s-app\": \"kube-proxy\",\n                    \"pod-template-generation\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kube-proxy\",\n                        \"uid\": \"be6ce503-aa7e-43b3-9c82-1e67fd75e9df\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ],\n         
       \"managedFields\": [\n                    {\n                        \"manager\": \"kube-controller-manager\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:27:43Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:metadata\": {\n                                \"f:generateName\": {},\n                                \"f:labels\": {\n                                    \"f:controller-revision-hash\": {},\n                                    \"f:k8s-app\": {},\n                                    \"f:pod-template-generation\": {},\n                                    \".\": {}\n                                },\n                                \"f:ownerReferences\": {\n                                    \"k:{\\\"uid\\\":\\\"be6ce503-aa7e-43b3-9c82-1e67fd75e9df\\\"}\": {\n                                        \"f:apiVersion\": {},\n                                        \"f:blockOwnerDeletion\": {},\n                                        \"f:controller\": {},\n                                        \"f:kind\": {},\n                                        \"f:name\": {},\n                                        \"f:uid\": {},\n                                        \".\": {}\n                                    },\n                                    \".\": {}\n                                }\n                            },\n                            \"f:spec\": {\n                                \"f:affinity\": {\n                                    \"f:nodeAffinity\": {\n                                        \"f:requiredDuringSchedulingIgnoredDuringExecution\": {\n                                            \"f:nodeSelectorTerms\": {},\n                                            \".\": {}\n                                        },\n                            
            \".\": {}\n                                    },\n                                    \".\": {}\n                                },\n                                \"f:containers\": {\n                                    \"k:{\\\"name\\\":\\\"kube-proxy\\\"}\": {\n                                        \"f:command\": {},\n                                        \"f:env\": {\n                                            \"k:{\\\"name\\\":\\\"NODE_NAME\\\"}\": {\n                                                \"f:name\": {},\n                                                \"f:valueFrom\": {\n                                                    \"f:fieldRef\": {\n                                                        \"f:apiVersion\": {},\n                                                        \"f:fieldPath\": {},\n                                                        \".\": {}\n                                                    },\n                                                    \".\": {}\n                                                },\n                                                \".\": {}\n                                            },\n                                            \".\": {}\n                                        },\n                                        \"f:image\": {},\n                                        \"f:imagePullPolicy\": {},\n                                        \"f:name\": {},\n                                        \"f:resources\": {},\n                                        \"f:securityContext\": {\n                                            \"f:privileged\": {},\n                                            \".\": {}\n                                        },\n                                        \"f:terminationMessagePath\": {},\n                                        \"f:terminationMessagePolicy\": {},\n                                        \"f:volumeMounts\": {\n                       
                     \"k:{\\\"mountPath\\\":\\\"/lib/modules\\\"}\": {\n                                                \"f:mountPath\": {},\n                                                \"f:name\": {},\n                                                \"f:readOnly\": {},\n                                                \".\": {}\n                                            },\n                                            \"k:{\\\"mountPath\\\":\\\"/run/xtables.lock\\\"}\": {\n                                                \"f:mountPath\": {},\n                                                \"f:name\": {},\n                                                \".\": {}\n                                            },\n                                            \"k:{\\\"mountPath\\\":\\\"/var/lib/kube-proxy\\\"}\": {\n                                                \"f:mountPath\": {},\n                                                \"f:name\": {},\n                                                \".\": {}\n                                            },\n                                            \"k:{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"}\": {\n                                                \"f:mountPath\": {},\n                                                \"f:name\": {},\n                                                \"f:readOnly\": {},\n                                                \".\": {}\n                                            },\n                                            \".\": {}\n                                        },\n                                        \".\": {}\n                                    }\n                                },\n                                \"f:dnsPolicy\": {},\n                                \"f:enableServiceLinks\": {},\n                                \"f:hostNetwork\": {},\n                                \"f:nodeSelector\": {\n                                    
\"f:beta.kubernetes.io/os\": {},\n                                    \".\": {}\n                                },\n                                \"f:priority\": {},\n                                \"f:priorityClassName\": {},\n                                \"f:restartPolicy\": {},\n                                \"f:schedulerName\": {},\n                                \"f:securityContext\": {},\n                                \"f:serviceAccount\": {},\n                                \"f:serviceAccountName\": {},\n                                \"f:terminationGracePeriodSeconds\": {},\n                                \"f:tolerations\": {},\n                                \"f:volumes\": {\n                                    \"k:{\\\"name\\\":\\\"kube-proxy\\\"}\": {\n                                        \"f:configMap\": {\n                                            \"f:defaultMode\": {},\n                                            \"f:name\": {},\n                                            \".\": {}\n                                        },\n                                        \"f:name\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"name\\\":\\\"kube-proxy-token-g5vsq\\\"}\": {\n                                        \"f:name\": {},\n                                        \"f:secret\": {\n                                            \"f:secretName\": {},\n                                            \".\": {}\n                                        },\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"name\\\":\\\"lib-modules\\\"}\": {\n                                        \"f:hostPath\": {\n                                            \"f:path\": {},\n                                            \"f:type\": {},\n                                            
\".\": {}\n                                        },\n                                        \"f:name\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"name\\\":\\\"xtables-lock\\\"}\": {\n                                        \"f:hostPath\": {\n                                            \"f:path\": {},\n                                            \"f:type\": {},\n                                            \".\": {}\n                                        },\n                                        \"f:name\": {},\n                                        \".\": {}\n                                    },\n                                    \".\": {}\n                                }\n                            }\n                        }\n                    },\n                    {\n                        \"manager\": \"kubelet\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:27:45Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:status\": {\n                                \"f:conditions\": {\n                                    \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n                                        \"f:lastProbeTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n                                        \"f:lastProbeTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        
\"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                        \"f:lastProbeTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    }\n                                },\n                                \"f:containerStatuses\": {},\n                                \"f:hostIP\": {},\n                                \"f:phase\": {},\n                                \"f:podIP\": {},\n                                \"f:podIPs\": {\n                                    \"k:{\\\"ip\\\":\\\"172.17.0.2\\\"}\": {\n                                        \"f:ip\": {},\n                                        \".\": {}\n                                    },\n                                    \".\": {}\n                                },\n                                \"f:startTime\": {}\n                            }\n                        }\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"configMap\": {\n                            \"name\": \"kube-proxy\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n   
                     \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kube-proxy-token-g5vsq\",\n                        \"secret\": {\n                            \"secretName\": \"kube-proxy-token-g5vsq\",\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.1.681_c12a96f7f64648\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\",\n                            \"--config=/var/lib/kube-proxy/config.conf\",\n                            \"--hostname-override=$(NODE_NAME)\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"NODE_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"spec.nodeName\"\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {},\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kube-proxy\",\n                                \"mountPath\": \"/var/lib/kube-proxy\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                   
         },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"kube-proxy-token-g5vsq\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeSelector\": {\n                    \"beta.kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"kube-proxy\",\n                \"serviceAccount\": \"kube-proxy\",\n                \"nodeName\": \"kind-control-plane\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n  
                                              \"kind-control-plane\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                 
   },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:27:43Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:27:45Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:27:45Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:27:43Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.2\",\n                \"podIP\": \"172.17.0.2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.2\"\n                    }\n                ],\n                \"startTime\": \"2020-01-14T21:27:43Z\",\n                \"containerStatuses\": [\n                    {\n                 
       \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2020-01-14T21:27:45Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.1.681_c12a96f7f64648\",\n                        \"imageID\": \"sha256:f3319968dae006eadfe699d5120c818675e12f7e179d3b6b27f501eac5fbc314\",\n                        \"containerID\": \"containerd://43d69085dd5d5aafff14f6373ed82b9a0b965edcb83642bfb2a2c35b37dbd1ef\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"BestEffort\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-sllbk\",\n                \"generateName\": \"kube-proxy-\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-proxy-sllbk\",\n                \"uid\": \"0fcef59d-80b6-4e1f-8478-2e8bfd13129c\",\n                \"resourceVersion\": \"565\",\n                \"creationTimestamp\": \"2020-01-14T21:28:01Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"77b478d68\",\n                    \"k8s-app\": \"kube-proxy\",\n                    \"pod-template-generation\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kube-proxy\",\n                        \"uid\": \"be6ce503-aa7e-43b3-9c82-1e67fd75e9df\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ],\n  
              \"managedFields\": [\n                    {\n                        \"manager\": \"kube-controller-manager\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:28:01Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:metadata\": {\n                                \"f:generateName\": {},\n                                \"f:labels\": {\n                                    \"f:controller-revision-hash\": {},\n                                    \"f:k8s-app\": {},\n                                    \"f:pod-template-generation\": {},\n                                    \".\": {}\n                                },\n                                \"f:ownerReferences\": {\n                                    \"k:{\\\"uid\\\":\\\"be6ce503-aa7e-43b3-9c82-1e67fd75e9df\\\"}\": {\n                                        \"f:apiVersion\": {},\n                                        \"f:blockOwnerDeletion\": {},\n                                        \"f:controller\": {},\n                                        \"f:kind\": {},\n                                        \"f:name\": {},\n                                        \"f:uid\": {},\n                                        \".\": {}\n                                    },\n                                    \".\": {}\n                                }\n                            },\n                            \"f:spec\": {\n                                \"f:affinity\": {\n                                    \"f:nodeAffinity\": {\n                                        \"f:requiredDuringSchedulingIgnoredDuringExecution\": {\n                                            \"f:nodeSelectorTerms\": {},\n                                            \".\": {}\n                                        },\n                     
                   \".\": {}\n                                    },\n                                    \".\": {}\n                                },\n                                \"f:containers\": {\n                                    \"k:{\\\"name\\\":\\\"kube-proxy\\\"}\": {\n                                        \"f:command\": {},\n                                        \"f:env\": {\n                                            \"k:{\\\"name\\\":\\\"NODE_NAME\\\"}\": {\n                                                \"f:name\": {},\n                                                \"f:valueFrom\": {\n                                                    \"f:fieldRef\": {\n                                                        \"f:apiVersion\": {},\n                                                        \"f:fieldPath\": {},\n                                                        \".\": {}\n                                                    },\n                                                    \".\": {}\n                                                },\n                                                \".\": {}\n                                            },\n                                            \".\": {}\n                                        },\n                                        \"f:image\": {},\n                                        \"f:imagePullPolicy\": {},\n                                        \"f:name\": {},\n                                        \"f:resources\": {},\n                                        \"f:securityContext\": {\n                                            \"f:privileged\": {},\n                                            \".\": {}\n                                        },\n                                        \"f:terminationMessagePath\": {},\n                                        \"f:terminationMessagePolicy\": {},\n                                        \"f:volumeMounts\": {\n                
                            \"k:{\\\"mountPath\\\":\\\"/lib/modules\\\"}\": {\n                                                \"f:mountPath\": {},\n                                                \"f:name\": {},\n                                                \"f:readOnly\": {},\n                                                \".\": {}\n                                            },\n                                            \"k:{\\\"mountPath\\\":\\\"/run/xtables.lock\\\"}\": {\n                                                \"f:mountPath\": {},\n                                                \"f:name\": {},\n                                                \".\": {}\n                                            },\n                                            \"k:{\\\"mountPath\\\":\\\"/var/lib/kube-proxy\\\"}\": {\n                                                \"f:mountPath\": {},\n                                                \"f:name\": {},\n                                                \".\": {}\n                                            },\n                                            \"k:{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"}\": {\n                                                \"f:mountPath\": {},\n                                                \"f:name\": {},\n                                                \"f:readOnly\": {},\n                                                \".\": {}\n                                            },\n                                            \".\": {}\n                                        },\n                                        \".\": {}\n                                    }\n                                },\n                                \"f:dnsPolicy\": {},\n                                \"f:enableServiceLinks\": {},\n                                \"f:hostNetwork\": {},\n                                \"f:nodeSelector\": {\n                                
    \"f:beta.kubernetes.io/os\": {},\n                                    \".\": {}\n                                },\n                                \"f:priority\": {},\n                                \"f:priorityClassName\": {},\n                                \"f:restartPolicy\": {},\n                                \"f:schedulerName\": {},\n                                \"f:securityContext\": {},\n                                \"f:serviceAccount\": {},\n                                \"f:serviceAccountName\": {},\n                                \"f:terminationGracePeriodSeconds\": {},\n                                \"f:tolerations\": {},\n                                \"f:volumes\": {\n                                    \"k:{\\\"name\\\":\\\"kube-proxy\\\"}\": {\n                                        \"f:configMap\": {\n                                            \"f:defaultMode\": {},\n                                            \"f:name\": {},\n                                            \".\": {}\n                                        },\n                                        \"f:name\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"name\\\":\\\"kube-proxy-token-g5vsq\\\"}\": {\n                                        \"f:name\": {},\n                                        \"f:secret\": {\n                                            \"f:secretName\": {},\n                                            \".\": {}\n                                        },\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"name\\\":\\\"lib-modules\\\"}\": {\n                                        \"f:hostPath\": {\n                                            \"f:path\": {},\n                                            \"f:type\": {},\n                                            
\".\": {}\n                                        },\n                                        \"f:name\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"name\\\":\\\"xtables-lock\\\"}\": {\n                                        \"f:hostPath\": {\n                                            \"f:path\": {},\n                                            \"f:type\": {},\n                                            \".\": {}\n                                        },\n                                        \"f:name\": {},\n                                        \".\": {}\n                                    },\n                                    \".\": {}\n                                }\n                            }\n                        }\n                    },\n                    {\n                        \"manager\": \"kubelet\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-01-14T21:28:06Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:status\": {\n                                \"f:conditions\": {\n                                    \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n                                        \"f:lastProbeTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n                                        \"f:lastProbeTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        
\"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                        \"f:lastProbeTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {},\n                                        \".\": {}\n                                    }\n                                },\n                                \"f:containerStatuses\": {},\n                                \"f:hostIP\": {},\n                                \"f:phase\": {},\n                                \"f:podIP\": {},\n                                \"f:podIPs\": {\n                                    \"k:{\\\"ip\\\":\\\"172.17.0.3\\\"}\": {\n                                        \"f:ip\": {},\n                                        \".\": {}\n                                    },\n                                    \".\": {}\n                                },\n                                \"f:startTime\": {}\n                            }\n                        }\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"configMap\": {\n                            \"name\": \"kube-proxy\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n   
                     \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kube-proxy-token-g5vsq\",\n                        \"secret\": {\n                            \"secretName\": \"kube-proxy-token-g5vsq\",\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.1.681_c12a96f7f64648\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\",\n                            \"--config=/var/lib/kube-proxy/config.conf\",\n                            \"--hostname-override=$(NODE_NAME)\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"NODE_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"spec.nodeName\"\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {},\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kube-proxy\",\n                                \"mountPath\": \"/var/lib/kube-proxy\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                   
         },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"kube-proxy-token-g5vsq\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeSelector\": {\n                    \"beta.kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"kube-proxy\",\n                \"serviceAccount\": \"kube-proxy\",\n                \"nodeName\": \"kind-worker2\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n        
                                        \"kind-worker2\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n     
               {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:28:01Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:28:06Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:28:06Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:28:01Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.3\",\n                \"podIP\": \"172.17.0.3\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.3\"\n                    }\n                ],\n                \"startTime\": \"2020-01-14T21:28:01Z\",\n                \"containerStatuses\": [\n                    {\n                        
\"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2020-01-14T21:28:05Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy:v1.18.0-alpha.1.681_c12a96f7f64648\",\n                        \"imageID\": \"sha256:f3319968dae006eadfe699d5120c818675e12f7e179d3b6b27f501eac5fbc314\",\n                        \"containerID\": \"containerd://c6874a0bab45b722ce08d1030abb6176446f2d8227d6968633ac3970967fbd68\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"BestEffort\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-kind-control-plane\",\n                \"namespace\": \"kube-system\",\n                \"selfLink\": \"/api/v1/namespaces/kube-system/pods/kube-scheduler-kind-control-plane\",\n                \"uid\": \"6207bc05-ad62-4e21-8ae7-7bdf1eaa9c88\",\n                \"resourceVersion\": \"232\",\n                \"creationTimestamp\": \"2020-01-14T21:27:28Z\",\n                \"labels\": {\n                    \"component\": \"kube-scheduler\",\n                    \"tier\": \"control-plane\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"a64f54248aed474b4b566bb40d3fc1b0\",\n                    \"kubernetes.io/config.mirror\": \"a64f54248aed474b4b566bb40d3fc1b0\",\n                    \"kubernetes.io/config.seen\": \"2020-01-14T21:27:28.462516145Z\",\n                    \"kubernetes.io/config.source\": \"file\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        
\"kind\": \"Node\",\n                        \"name\": \"kind-control-plane\",\n                        \"uid\": \"463295e6-ff31-4747-8659-7f8e155ce671\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/scheduler.conf\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-scheduler\",\n                        \"image\": \"k8s.gcr.io/kube-scheduler:v1.18.0-alpha.1.681_c12a96f7f64648\",\n                        \"command\": [\n                            \"kube-scheduler\",\n                            \"--authentication-kubeconfig=/etc/kubernetes/scheduler.conf\",\n                            \"--authorization-kubeconfig=/etc/kubernetes/scheduler.conf\",\n                            \"--bind-address=127.0.0.1\",\n                            \"--kubeconfig=/etc/kubernetes/scheduler.conf\",\n                            \"--leader-elect=true\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/scheduler.conf\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": 
\"/healthz\",\n                                \"port\": 10259,\n                                \"host\": \"127.0.0.1\",\n                                \"scheme\": \"HTTPS\"\n                            },\n                            \"initialDelaySeconds\": 15,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 8\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"kind-control-plane\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:27:28Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        
\"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:27:28Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:27:28Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2020-01-14T21:27:28Z\"\n                    }\n                ],\n                \"hostIP\": \"172.17.0.2\",\n                \"podIP\": \"172.17.0.2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"172.17.0.2\"\n                    }\n                ],\n                \"startTime\": \"2020-01-14T21:27:28Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-scheduler\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2020-01-14T21:27:20Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-scheduler:v1.18.0-alpha.1.681_c12a96f7f64648\",\n                        \"imageID\": \"sha256:2e5b757c9197bfb63173cc431b603a26e7128b256c91e006951164738f60d201\",\n                        \"containerID\": \"containerd://aee4a5f7c78b915ccab0576a10f9ebe964f33a8bbc6b79763310b91261c86726\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        }\n    ]\n}\n==== START logs for container coredns of pod 
kube-system/coredns-6955765f44-45qgf ====\n.:53\n[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7\nCoreDNS-1.6.5\nlinux/amd64, go1.13.4, c2fd1b2\n==== END logs for container coredns of pod kube-system/coredns-6955765f44-45qgf ====\n==== START logs for container coredns of pod kube-system/coredns-6955765f44-blnrh ====\n.:53\n[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7\nCoreDNS-1.6.5\nlinux/amd64, go1.13.4, c2fd1b2\n==== END logs for container coredns of pod kube-system/coredns-6955765f44-blnrh ====\n==== START logs for container etcd of pod kube-system/etcd-kind-control-plane ====\n[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead\n2020-01-14 21:27:20.384732 I | etcdmain: etcd Version: 3.4.3\n2020-01-14 21:27:20.384793 I | etcdmain: Git SHA: 3cf2f69b5\n2020-01-14 21:27:20.384799 I | etcdmain: Go Version: go1.12.12\n2020-01-14 21:27:20.384804 I | etcdmain: Go OS/Arch: linux/amd64\n2020-01-14 21:27:20.384810 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8\n[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead\n2020-01-14 21:27:20.384943 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = \n2020-01-14 21:27:20.385940 I | embed: name = kind-control-plane\n2020-01-14 21:27:20.385957 I | embed: data dir = /var/lib/etcd\n2020-01-14 21:27:20.385963 I | embed: member dir = /var/lib/etcd/member\n2020-01-14 21:27:20.385968 I | embed: heartbeat = 100ms\n2020-01-14 21:27:20.385972 I | embed: election = 1000ms\n2020-01-14 21:27:20.385977 I | embed: snapshot count = 10000\n2020-01-14 21:27:20.385995 I | embed: advertise client URLs = https://172.17.0.2:2379\n2020-01-14 21:27:20.395624 I | etcdserver: starting member b8e14bda2255bc24 in cluster 
38b0e74a458e7a1f\nraft2020/01/14 21:27:20 INFO: b8e14bda2255bc24 switched to configuration voters=()\nraft2020/01/14 21:27:20 INFO: b8e14bda2255bc24 became follower at term 0\nraft2020/01/14 21:27:20 INFO: newRaft b8e14bda2255bc24 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]\nraft2020/01/14 21:27:20 INFO: b8e14bda2255bc24 became follower at term 1\nraft2020/01/14 21:27:20 INFO: b8e14bda2255bc24 switched to configuration voters=(13322012572989635620)\n2020-01-14 21:27:20.414031 W | auth: simple token is not cryptographically signed\n2020-01-14 21:27:20.419785 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]\n2020-01-14 21:27:20.420741 I | etcdserver: b8e14bda2255bc24 as single-node; fast-forwarding 9 ticks (election ticks 10)\nraft2020/01/14 21:27:20 INFO: b8e14bda2255bc24 switched to configuration voters=(13322012572989635620)\n2020-01-14 21:27:20.421433 I | etcdserver/membership: added member b8e14bda2255bc24 [https://172.17.0.2:2380] to cluster 38b0e74a458e7a1f\n2020-01-14 21:27:20.422361 I | embed: ClientTLS: cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = \n2020-01-14 21:27:20.422560 I | embed: listening for metrics on http://127.0.0.1:2381\n2020-01-14 21:27:20.422751 I | embed: listening for peers on 172.17.0.2:2380\nraft2020/01/14 21:27:21 INFO: b8e14bda2255bc24 is starting a new election at term 1\nraft2020/01/14 21:27:21 INFO: b8e14bda2255bc24 became candidate at term 2\nraft2020/01/14 21:27:21 INFO: b8e14bda2255bc24 received MsgVoteResp from b8e14bda2255bc24 at term 2\nraft2020/01/14 21:27:21 INFO: b8e14bda2255bc24 became leader at term 2\nraft2020/01/14 21:27:21 INFO: raft.node: b8e14bda2255bc24 elected leader b8e14bda2255bc24 at term 2\n2020-01-14 21:27:21.303619 I | etcdserver: setting up the initial cluster version to 3.4\n2020-01-14 21:27:21.307696 N | 
etcdserver/membership: set the initial cluster version to 3.4\n2020-01-14 21:27:21.307766 I | etcdserver/api: enabled capabilities for version 3.4\n2020-01-14 21:27:21.307805 I | etcdserver: published {Name:kind-control-plane ClientURLs:[https://172.17.0.2:2379]} to cluster 38b0e74a458e7a1f\n2020-01-14 21:27:21.307920 I | embed: ready to serve client requests\n2020-01-14 21:27:21.308045 I | embed: ready to serve client requests\n2020-01-14 21:27:21.313584 I | embed: serving client requests on 172.17.0.2:2379\n2020-01-14 21:27:21.314766 I | embed: serving client requests on 127.0.0.1:2379\n2020-01-14 21:28:51.658274 W | etcdserver: request \"header:<ID:13557083536848868673 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/namespaces/provisioning-1934\\\" mod_revision:0 > success:<request_put:<key:\\\"/registry/namespaces/provisioning-1934\\\" value_size:362 >> failure:<>>\" with result \"size:16\" took too long (138.983793ms) to execute\n2020-01-14 21:28:51.659928 W | etcdserver: read-only range request \"key:\\\"/registry/pods/emptydir-360/pod-c4b0682c-2c0c-4e03-b55c-33096e1b1ecf\\\" \" with result \"range_response_count:1 size:980\" took too long (233.866361ms) to execute\n2020-01-14 21:28:51.882125 W | etcdserver: read-only range request \"key:\\\"/registry/pods/mount-propagation-3051/master\\\" \" with result \"range_response_count:1 size:1157\" took too long (226.295815ms) to execute\n2020-01-14 21:28:51.892515 W | etcdserver: read-only range request \"key:\\\"/registry/pods/pods-1017/pod-exec-websocket-23972b99-2e23-4e53-8713-4a6a46e003f3\\\" \" with result \"range_response_count:1 size:1136\" took too long (235.42549ms) to execute\n2020-01-14 21:28:51.892645 W | etcdserver: read-only range request \"key:\\\"/registry/pods/configmap-568/pod-configmaps-39819156-8e04-444c-9993-b544cacb400d\\\" \" with result \"range_response_count:1 size:1460\" took too long (249.009999ms) to execute\n2020-01-14 
21:28:51.892709 W | etcdserver: read-only range request \"key:\\\"/registry/pods/provisioning-245/hostexec-kind-worker-jsrgc\\\" \" with result \"range_response_count:1 size:1177\" took too long (256.54573ms) to execute\n2020-01-14 21:28:51.892766 W | etcdserver: read-only range request \"key:\\\"/registry/pods/container-runtime-3606/termination-message-containere9a6b53b-4f1a-4e04-abad-301cf9084d1d\\\" \" with result \"range_response_count:1 size:1306\" took too long (306.13968ms) to execute\n2020-01-14 21:28:51.892835 W | etcdserver: read-only range request \"key:\\\"/registry/pods/downward-api-291/annotationupdate0d75f49a-72cb-4bdc-91b8-788a38a007a8\\\" \" with result \"range_response_count:1 size:1654\" took too long (311.847986ms) to execute\n2020-01-14 21:28:51.892914 W | etcdserver: read-only range request \"key:\\\"/registry/pods/provisioning-6087/hostexec-kind-worker2-tvk2r\\\" \" with result \"range_response_count:1 size:2955\" took too long (387.740748ms) to execute\n2020-01-14 21:28:51.892992 W | etcdserver: read-only range request \"key:\\\"/registry/pods/persistent-local-volumes-test-3410/hostexec-kind-worker-rtqfp\\\" \" with result \"range_response_count:1 size:1841\" took too long (401.310499ms) to execute\n2020-01-14 21:28:51.893047 W | etcdserver: read-only range request \"key:\\\"/registry/limitranges/projected-6774/\\\" range_end:\\\"/registry/limitranges/projected-67740\\\" \" with result \"range_response_count:0 size:5\" took too long (455.973624ms) to execute\n2020-01-14 21:28:52.259360 W | etcdserver: read-only range request \"key:\\\"/registry/pods/container-runtime-100/image-pull-test4fef726a-4a6b-4a41-b917-f6bcb3370c0a\\\" \" with result \"range_response_count:1 size:829\" took too long (472.111962ms) to execute\n2020-01-14 21:28:52.259517 W | etcdserver: request \"header:<ID:13557083536848868683 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD 
key:\\\"/registry/events/csi-mock-volumes-7542/csi-mockplugin-resizer.15e9de165650c8fb\\\" mod_revision:0 > success:<request_put:<key:\\\"/registry/events/csi-mock-volumes-7542/csi-mockplugin-resizer.15e9de165650c8fb\\\" value_size:434 lease:4333711499994092390 >> failure:<>>\" with result \"size:16\" took too long (166.59415ms) to execute\n2020-01-14 21:28:52.260156 W | etcdserver: read-only range request \"key:\\\"/registry/pods/dns-1301/dns-test-f76a055f-02a8-4627-ba72-68919a33dce8\\\" \" with result \"range_response_count:1 size:2285\" took too long (582.634614ms) to execute\n2020-01-14 21:28:52.260605 W | etcdserver: read-only range request \"key:\\\"/registry/minions\\\" range_end:\\\"/registry/miniont\\\" count_only:true \" with result \"range_response_count:0 size:7\" took too long (478.075152ms) to execute\n2020-01-14 21:28:52.260748 W | etcdserver: read-only range request \"key:\\\"/registry/resourcequotas/provisioning-1934/\\\" range_end:\\\"/registry/resourcequotas/provisioning-19340\\\" \" with result \"range_response_count:0 size:5\" took too long (597.663994ms) to execute\n2020-01-14 21:28:52.265205 W | etcdserver: read-only range request \"key:\\\"/registry/pods/container-runtime-1102/termination-message-container5d2fa30a-9e20-4125-8b6b-bf928599d982\\\" \" with result \"range_response_count:1 size:1294\" took too long (130.481924ms) to execute\n2020-01-14 21:28:52.266906 W | etcdserver: read-only range request \"key:\\\"/registry/pods/kubectl-4632/\\\" range_end:\\\"/registry/pods/kubectl-46320\\\" \" with result \"range_response_count:1 size:2049\" took too long (269.161714ms) to execute\n2020-01-14 21:28:52.267011 W | etcdserver: read-only range request \"key:\\\"/registry/pods/provisioning-195/hostpath-symlink-prep-provisioning-195\\\" \" with result \"range_response_count:1 size:2988\" took too long (188.365985ms) to execute\n2020-01-14 21:28:52.267090 W | etcdserver: read-only range request \"key:\\\"/registry/pods/csi-mock-volumes-7542/\\\" 
range_end:\\\"/registry/pods/csi-mock-volumes-75420\\\" \" with result \"range_response_count:2 size:3424\" took too long (371.977599ms) to execute\n2020-01-14 21:28:52.267387 W | etcdserver: read-only range request \"key:\\\"/registry/pods/persistent-local-volumes-test-3410/hostexec-kind-worker-rtqfp\\\" \" with result \"range_response_count:1 size:1841\" took too long (326.904753ms) to execute\n2020-01-14 21:28:52.268124 W | etcdserver: read-only range request \"key:\\\"/registry/pods/tables-2921/pod-1\\\" \" with result \"range_response_count:1 size:743\" took too long (372.30122ms) to execute\n2020-01-14 21:29:29.425122 W | etcdserver: read-only range request \"key:\\\"/registry/services/endpoints/kube-system/kube-scheduler\\\" \" with result \"range_response_count:1 size:440\" took too long (103.72108ms) to execute\n2020-01-14 21:29:50.031834 W | etcdserver: read-only range request \"key:\\\"/registry/podtemplates/tables-4388/template-0002\\\" \" with result \"range_response_count:1 size:347\" took too long (136.099128ms) to execute\n2020-01-14 21:29:50.032294 W | etcdserver: read-only range request \"key:\\\"/registry/events/projected-2048/pod-projected-configmaps-1cba314b-9d9b-4b56-8de7-00cb0fcea7d1.15e9de1ee213b0b0\\\" \" with result \"range_response_count:1 size:649\" took too long (134.952107ms) to execute\n2020-01-14 21:30:13.121067 W | etcdserver: read-only range request \"key:\\\"/registry/persistentvolumeclaims/csi-mock-volumes-7542/pvc-pfg98\\\" \" with result \"range_response_count:1 size:1581\" took too long (104.565721ms) to execute\n2020-01-14 21:31:05.245042 W | etcdserver: read-only range request \"key:\\\"/registry/pods/volume-174/hostexec-kind-worker2-j656h\\\" \" with result \"range_response_count:1 size:1229\" took too long (106.922997ms) to execute\n2020-01-14 21:31:05.245125 W | etcdserver: read-only range request \"key:\\\"/registry/persistentvolumeclaims/volume-174/pvc-7wrbq\\\" \" with result \"range_response_count:1 size:1026\" took 
too long (107.286268ms) to execute\n2020-01-14 21:31:05.250222 W | etcdserver: read-only range request \"key:\\\"/registry/persistentvolumeclaims/csi-mock-volumes-7542/pvc-pfg98\\\" \" with result \"range_response_count:1 size:1284\" took too long (107.218193ms) to execute\n2020-01-14 21:31:05.522020 W | etcdserver: read-only range request \"key:\\\"/registry/pods/ephemeral-4014/inline-volume-tester-vplmv\\\" \" with result \"range_response_count:1 size:3085\" took too long (129.218043ms) to execute\n2020-01-14 21:31:06.100350 W | etcdserver: read-only range request \"key:\\\"/registry/persistentvolumes/pvc-6909ecbc-b532-45f6-a650-3e9947fcf736\\\" \" with result \"range_response_count:1 size:732\" took too long (158.081919ms) to execute\n2020-01-14 21:31:06.161303 W | etcdserver: read-only range request \"key:\\\"/registry/persistentvolumes/pvc-6909ecbc-b532-45f6-a650-3e9947fcf736\\\" \" with result \"range_response_count:1 size:732\" took too long (231.511266ms) to execute\n2020-01-14 21:31:06.169799 W | etcdserver: read-only range request \"key:\\\"/registry/persistentvolumeclaims/csi-mock-volumes-8822/pvc-bhf4k\\\" \" with result \"range_response_count:1 size:1213\" took too long (230.891159ms) to execute\n2020-01-14 21:31:06.171002 W | etcdserver: read-only range request \"key:\\\"/registry/cronjobs/secrets-2760/\\\" range_end:\\\"/registry/cronjobs/secrets-27600\\\" \" with result \"range_response_count:0 size:5\" took too long (231.041596ms) to execute\n2020-01-14 21:31:08.078862 W | etcdserver: read-only range request \"key:\\\"/registry/limitranges/secrets-2760/\\\" range_end:\\\"/registry/limitranges/secrets-27600\\\" \" with result \"range_response_count:0 size:5\" took too long (120.781469ms) to execute\n2020-01-14 21:31:08.079537 W | etcdserver: read-only range request \"key:\\\"/registry/pods/dns-9442/dns-test-91c385b0-e952-425b-ad56-3f7612c66e6f\\\" \" with result \"range_response_count:1 size:9426\" took too long (125.683354ms) to execute\n2020-01-14 
21:31:08.079727 W | etcdserver: read-only range request \"key:\\\"/registry/configmaps/volumemode-7700/\\\" range_end:\\\"/registry/configmaps/volumemode-77000\\\" \" with result \"range_response_count:0 size:5\" took too long (122.021379ms) to execute\n2020-01-14 21:31:09.866092 W | etcdserver: read-only range request \"key:\\\"/registry/pods/volume-6476/hostpathsymlink-client\\\" \" with result \"range_response_count:1 size:3167\" took too long (106.108281ms) to execute\n2020-01-14 21:31:09.868992 W | etcdserver: read-only range request \"key:\\\"/registry/pods/services-2366/pod1\\\" \" with result \"range_response_count:1 size:1212\" took too long (109.064163ms) to execute\n2020-01-14 21:31:09.910226 W | etcdserver: request \"header:<ID:13557083536848897653 username:\\\"kube-apiserver-etcd-client\\\" auth_revision:1 > txn:<compare:<target:MOD key:\\\"/registry/pods/job-3875/adopt-release-jg58t\\\" mod_revision:7845 > success:<request_put:<key:\\\"/registry/pods/job-3875/adopt-release-jg58t\\\" value_size:2079 >> failure:<request_range:<key:\\\"/registry/pods/job-3875/adopt-release-jg58t\\\" > >>\" with result \"size:16\" took too long (120.960423ms) to execute\n2020-01-14 21:31:09.914673 W | etcdserver: read-only range request \"key:\\\"/registry/poddisruptionbudgets/secret-namespace-514/\\\" range_end:\\\"/registry/poddisruptionbudgets/secret-namespace-5140\\\" \" with result \"range_response_count:0 size:5\" took too long (117.08238ms) to execute\n2020-01-14 21:31:09.917140 W | etcdserver: read-only range request \"key:\\\"/registry/serviceaccounts/persistent-local-volumes-test-7782/default\\\" \" with result \"range_response_count:1 size:228\" took too long (119.832768ms) to execute\n2020-01-14 21:31:09.917413 W | etcdserver: read-only range request \"key:\\\"/registry/pods/kubectl-1737/frontend-6c5f89d5d4-r7x9p\\\" \" with result \"range_response_count:1 size:1696\" took too long (155.979582ms) to execute\n2020-01-14 21:31:10.037033 W | etcdserver: read-only 
range request \"key:\\\"/registry/jobs/cronjob-9015/forbid-1579037460\\\" \" with result \"range_response_count:1 size:1522\" took too long (106.13772ms) to execute\n2020-01-14 21:31:10.042295 W | etcdserver: read-only range request \"key:\\\"/registry/replicasets/kubectl-1737/\\\" range_end:\\\"/registry/replicasets/kubectl-17370\\\" \" with result \"range_response_count:0 size:5\" took too long (107.501662ms) to execute\n2020-01-14 21:31:10.043083 W | etcdserver: read-only range request \"key:\\\"/registry/events/containers-849/client-containers-166494e7-0c59-433d-954d-4a82dd77e2f7.15e9de3301ebf76b\\\" \" with result \"range_response_count:1 size:577\" took too long (107.168611ms) to execute\n2020-01-14 21:31:10.043714 W | etcdserver: read-only range request \"key:\\\"/registry/ranges/serviceips\\\" \" with result \"range_response_count:1 size:125599\" took too long (107.954498ms) to execute\n2020-01-14 21:31:10.044706 W | etcdserver: read-only range request \"key:\\\"/registry/ingress/secrets-9495/\\\" range_end:\\\"/registry/ingress/secrets-94950\\\" \" with result \"range_response_count:0 size:5\" took too long (106.824011ms) to execute\n2020-01-14 21:31:10.071200 W | etcdserver: read-only range request \"key:\\\"/registry/persistentvolumeclaims/csi-mock-volumes-8822/pvc-bhf4k\\\" \" with result \"range_response_count:1 size:1213\" took too long (101.294663ms) to execute\n2020-01-14 21:31:12.905960 W | etcdserver: read-only range request \"key:\\\"/registry/configmaps/kubectl-8260/cm1mjn46qfpsp\\\" \" with result \"range_response_count:1 size:279\" took too long (106.581332ms) to execute\n2020-01-14 21:31:13.224577 W | etcdserver: read-only range request \"key:\\\"/registry/events/kubectl-1737/frontend-6c5f89d5d4.15e9de2d95bbe3bf\\\" \" with result \"range_response_count:1 size:462\" took too long (149.576199ms) to execute\n2020-01-14 21:31:13.224896 W | etcdserver: read-only range request \"key:\\\"/registry/poddisruptionbudgets/kubectl-7013/\\\" 
range_end:\\\"/registry/poddisruptionbudgets/kubectl-70130\\\" \" with result \"range_response_count:0 size:5\" took too long (158.35322ms) to execute\n2020-01-14 21:31:13.225658 W | etcdserver: read-only range request \"key:\\\"/registry/events/volume-174/local-injector.15e9de27404a34f6\\\" \" with result \"range_response_count:1 size:443\" took too long (150.682466ms) to execute\n2020-01-14 21:31:13.242930 W | etcdserver: read-only range request \"key:\\\"/registry/endpointslices/webhook-9370/\\\" range_end:\\\"/registry/endpointslices/webhook-93700\\\" \" with result \"range_response_count:0 size:5\" took too long (150.541261ms) to execute\n2020-01-14 21:31:13.246148 W | etcdserver: read-only range request \"key:\\\"/registry/services/endpoints/services-2366/endpoint-test2\\\" \" with result \"range_response_count:1 size:456\" took too long (106.716189ms) to execute\n2020-01-14 21:31:16.998074 W | etcdserver: read-only range request \"key:\\\"/registry/services/endpoints/default/kubernetes\\\" \" with result \"range_response_count:1 size:213\" took too long (151.091483ms) to execute\n2020-01-14 21:31:19.196560 W | etcdserver: read-only range request \"key:\\\"/registry/pods/kubectl-1737/agnhost-slave-774cfc759f-n24sl\\\" \" with result \"range_response_count:1 size:1533\" took too long (123.176619ms) to execute\n2020-01-14 21:31:19.199878 W | etcdserver: read-only range request \"key:\\\"/registry/persistentvolumeclaims/csi-mock-volumes-5275/pvc-vksrv\\\" \" with result \"range_response_count:1 size:887\" took too long (114.645912ms) to execute\n2020-01-14 21:31:26.186228 W | etcdserver: read-only range request \"key:\\\"/registry/events/kubectl-8260/\\\" range_end:\\\"/registry/events/kubectl-82600\\\" \" with result \"range_response_count:23 size:12774\" took too long (121.11077ms) to execute\n2020-01-14 21:31:35.338151 W | etcdserver: read-only range request \"key:\\\"/registry/configmaps/ephemeral-3610/\\\" 
range_end:\\\"/registry/configmaps/ephemeral-36100\\\" \" with result \"range_response_count:0 size:5\" took too long (105.312797ms) to execute\n2020-01-14 21:31:45.687524 I | etcdserver: start to snapshot (applied: 10001, lastsnap: 0)\n2020-01-14 21:31:45.712863 I | etcdserver: saved snapshot at index 10001\n2020-01-14 21:31:45.713064 I | etcdserver: compacted raft log at 5001\n2020-01-14 21:31:46.158958 W | etcdserver: read-only range request \"key:\\\"/registry/pods/provisioning-8777/hostpath-symlink-prep-provisioning-8777\\\" \" with result \"range_response_count:1 size:1909\" took too long (124.250776ms) to execute\n2020-01-14 21:32:00.548464 W | etcdserver: read-only range request \"key:\\\"/registry/crd-publish-openapi-test-unknown-in-nested.example.com/e2e-test-crd-publish-openapi-453-crds/job-3875/\\\" range_end:\\\"/registry/crd-publish-openapi-test-unknown-in-nested.example.com/e2e-test-crd-publish-openapi-453-crds/job-38750\\\" \" with result \"range_response_count:0 size:5\" took too long (161.000161ms) to execute\n2020-01-14 21:32:00.550854 W | etcdserver: read-only range request \"key:\\\"/registry/crd-publish-openapi-test-unknown-in-nested.example.com/e2e-test-crd-publish-openapi-453-crds\\\" range_end:\\\"/registry/crd-publish-openapi-test-unknown-in-nested.example.com/e2e-test-crd-publish-openapi-453-crdt\\\" count_only:true \" with result \"range_response_count:0 size:5\" took too long (163.270664ms) to execute\n2020-01-14 21:32:00.551524 W | etcdserver: read-only range request \"key:\\\"/registry/services/endpoints/kube-system/kube-controller-manager\\\" \" with result \"range_response_count:1 size:613\" took too long (102.825518ms) to execute\n2020-01-14 21:32:00.552492 W | etcdserver: read-only range request \"key:\\\"/registry/crd-publish-openapi-test-unknown-in-nested.example.com/e2e-test-crd-publish-openapi-453-crds/\\\" range_end:\\\"/registry/crd-publish-openapi-test-unknown-in-nested.example.com/e2e-test-crd-publish-openapi-453-crds0\\\" 
limit:10000 \" with result \"range_response_count:0 size:5\" took too long (164.881218ms) to execute\n2020-01-14 21:32:00.553031 W | etcdserver: read-only range request \"key:\\\"/registry/pods/provisioning-8777/pod-subpath-test-inlinevolume-8dwm\\\" \" with result \"range_response_count:1 size:2657\" took too long (163.717357ms) to execute\n2020-01-14 21:32:00.553117 W | etcdserver: read-only range request \"key:\\\"/registry/pods/volume-8964/configmap-client\\\" \" with result \"range_response_count:1 size:1420\" took too long (117.545294ms) to execute\n2020-01-14 21:32:00.553177 W | etcdserver: read-only range request \"key:\\\"/registry/namespaces/kubelet-test-6614\\\" \" with result \"range_response_count:1 size:1876\" took too long (149.230419ms) to execute\n2020-01-14 21:32:00.553303 W | etcdserver: read-only range request \"key:\\\"/registry/services/endpoints/local-path-storage/rancher.io-local-path\\\" \" with result \"range_response_count:1 size:489\" took too long (150.347201ms) to execute\n==== END logs for container etcd of pod kube-system/etcd-kind-control-plane ====\n==== START logs for container kindnet-cni of pod kube-system/kindnet-2hf8t ====\nI0114 21:27:47.131587       1 main.go:64] hostIP = 172.17.0.2\npodIP = 172.17.0.2\nI0114 21:27:47.733581       1 main.go:150] handling current node\nI0114 21:27:57.831898       1 main.go:150] handling current node\nI0114 21:28:07.934128       1 main.go:150] handling current node\nI0114 21:28:07.934170       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:28:07.934179       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:28:07.934420       1 routes.go:47] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.17.0.4 Flags: [] Table: 0} \nI0114 21:28:07.934478       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:28:07.934485       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:28:07.934589       1 routes.go:47] Adding route {Ifindex: 0 Dst: 
10.244.1.0/24 Src: <nil> Gw: 172.17.0.3 Flags: [] Table: 0} \nI0114 21:28:18.143365       1 main.go:150] handling current node\nI0114 21:28:18.143419       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:28:18.143427       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:28:18.143567       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:28:18.143575       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:28:28.235600       1 main.go:150] handling current node\nI0114 21:28:28.235641       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:28:28.235650       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:28:28.235771       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:28:28.235788       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:28:38.336114       1 main.go:150] handling current node\nI0114 21:28:38.336154       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:28:38.336163       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:28:38.336301       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:28:38.336319       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:28:48.351182       1 main.go:150] handling current node\nI0114 21:28:48.435315       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:28:48.435590       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:28:48.435784       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:28:48.435801       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:28:58.532234       1 main.go:150] handling current node\nI0114 21:28:58.532265       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:28:58.532272       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:28:58.532366       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:28:58.532371       1 main.go:162] Node 
kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:29:08.632400       1 main.go:150] handling current node\nI0114 21:29:08.632639       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:29:08.632675       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:29:08.632792       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:29:08.632811       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:29:18.733022       1 main.go:150] handling current node\nI0114 21:29:18.733100       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:29:18.733121       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:29:18.733240       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:29:18.733246       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:29:28.741714       1 main.go:150] handling current node\nI0114 21:29:28.741855       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:29:28.741917       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:29:28.742127       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:29:28.742206       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:29:38.835614       1 main.go:150] handling current node\nI0114 21:29:38.835648       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:29:38.835656       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:29:38.835792       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:29:38.835863       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:29:48.849032       1 main.go:150] handling current node\nI0114 21:29:48.931320       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:29:48.931562       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:29:48.931841       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:29:48.941206       1 main.go:162] Node kind-worker2 has CIDR 
10.244.1.0/24 \nI0114 21:29:58.947788       1 main.go:150] handling current node\nI0114 21:29:58.947822       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:29:58.947831       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:29:59.033262       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:29:59.033534       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:30:09.134006       1 main.go:150] handling current node\nI0114 21:30:09.134156       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:30:09.134192       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:30:09.135681       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:30:09.136003       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:30:19.232791       1 main.go:150] handling current node\nI0114 21:30:19.232838       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:30:19.232847       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:30:19.232992       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:30:19.233011       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:30:29.333028       1 main.go:150] handling current node\nI0114 21:30:29.333135       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:30:29.333164       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:30:29.333295       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:30:29.333312       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:30:39.337905       1 main.go:150] handling current node\nI0114 21:30:39.337935       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:30:39.337941       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:30:39.338056       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:30:39.338072       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 
21:30:49.535050       1 main.go:150] handling current node\nI0114 21:30:49.535154       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:30:49.535415       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:30:49.535635       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:30:49.535691       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:30:59.632546       1 main.go:150] handling current node\nI0114 21:30:59.632595       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:30:59.632604       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:30:59.632733       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:30:59.632758       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:31:09.646676       1 main.go:150] handling current node\nI0114 21:31:09.646712       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:31:09.646721       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:31:09.646842       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:31:09.646849       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:31:19.686297       1 main.go:150] handling current node\nI0114 21:31:19.686347       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:31:19.686356       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:31:19.686483       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:31:19.686498       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:31:29.731642       1 main.go:150] handling current node\nI0114 21:31:29.731699       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:31:29.731707       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:31:29.731898       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:31:29.732090       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:31:39.838187       1 
main.go:150] handling current node\nI0114 21:31:39.838225       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:31:39.838233       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:31:39.846987       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:31:39.847094       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:31:49.931909       1 main.go:150] handling current node\nI0114 21:31:49.932480       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:31:49.932899       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:31:49.933296       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:31:49.933548       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:32:00.032652       1 main.go:150] handling current node\nI0114 21:32:00.032689       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:32:00.032698       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:32:00.032858       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:32:00.032914       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:32:10.232412       1 main.go:150] handling current node\nI0114 21:32:10.232451       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:32:10.232460       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:32:10.232595       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:32:10.232610       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:32:20.332269       1 main.go:150] handling current node\nI0114 21:32:20.332643       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:32:20.332826       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:32:20.333028       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:32:20.333070       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:32:30.340133       1 main.go:150] handling current 
node\nI0114 21:32:30.340181       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:32:30.340190       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:32:30.340317       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:32:30.340338       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \n==== END logs for container kindnet-cni of pod kube-system/kindnet-2hf8t ====\n==== START logs for container kindnet-cni of pod kube-system/kindnet-6rhkp ====\nI0114 21:28:06.634660       1 main.go:64] hostIP = 172.17.0.4\npodIP = 172.17.0.4\nI0114 21:28:07.141412       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:28:07.141442       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:28:07.141574       1 routes.go:47] Adding route {Ifindex: 0 Dst: 10.244.0.0/24 Src: <nil> Gw: 172.17.0.2 Flags: [] Table: 0} \nI0114 21:28:07.141601       1 main.go:150] handling current node\nI0114 21:28:07.144263       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:28:07.144285       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:28:07.144391       1 routes.go:47] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.17.0.3 Flags: [] Table: 0} \nI0114 21:28:17.150156       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:28:17.150285       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:28:17.150470       1 main.go:150] handling current node\nI0114 21:28:17.150531       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:28:17.150555       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:28:27.331755       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:28:27.331793       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:28:27.331921       1 main.go:150] handling current node\nI0114 21:28:27.331935       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:28:27.331941       1 
main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:28:37.336459       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:28:37.336488       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:28:37.336574       1 main.go:150] handling current node\nI0114 21:28:37.336595       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:28:37.336599       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:28:47.432331       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:28:47.432365       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:28:47.432494       1 main.go:150] handling current node\nI0114 21:28:47.432518       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:28:47.432524       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:28:57.532883       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:28:57.532916       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:28:57.533080       1 main.go:150] handling current node\nI0114 21:28:57.533099       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:28:57.533105       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:29:07.636425       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:29:07.636453       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:29:07.636659       1 main.go:150] handling current node\nI0114 21:29:07.636672       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:29:07.636677       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:29:17.736280       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:29:17.736311       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:29:17.736545       1 main.go:150] handling current node\nI0114 21:29:17.736562       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:29:17.736569       1 
main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:29:27.832473       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:29:27.832515       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:29:27.832758       1 main.go:150] handling current node\nI0114 21:29:27.832775       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:29:27.832782       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:29:37.941252       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:29:37.941282       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:29:37.941501       1 main.go:150] handling current node\nI0114 21:29:37.941523       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:29:37.941528       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:29:47.982446       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:29:47.982473       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:29:47.982745       1 main.go:150] handling current node\nI0114 21:29:47.982764       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:29:47.982771       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:29:58.032710       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:29:58.032754       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:29:58.033026       1 main.go:150] handling current node\nI0114 21:29:58.033039       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:29:58.033044       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:30:08.132558       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:30:08.132589       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:30:08.132864       1 main.go:150] handling current node\nI0114 21:30:08.132883       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:30:08.132888       1 
main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:30:18.232842       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:30:18.232878       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:30:18.233096       1 main.go:150] handling current node\nI0114 21:30:18.233112       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:30:18.233117       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:30:28.238920       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:30:28.238961       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:30:28.239182       1 main.go:150] handling current node\nI0114 21:30:28.331439       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:30:28.331477       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:30:38.435004       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:30:38.435039       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:30:38.435431       1 main.go:150] handling current node\nI0114 21:30:38.435460       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:30:38.435467       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:30:48.534443       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:30:48.534489       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:30:48.534822       1 main.go:150] handling current node\nI0114 21:30:48.534859       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:30:48.534882       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:30:58.548251       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:30:58.548307       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:30:58.548520       1 main.go:150] handling current node\nI0114 21:30:58.548544       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:30:58.548551       1 
main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:31:08.598075       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:31:08.598104       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:31:08.598401       1 main.go:150] handling current node\nI0114 21:31:08.598420       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:31:08.598426       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:31:18.634968       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:31:18.634999       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:31:18.635279       1 main.go:150] handling current node\nI0114 21:31:18.635305       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:31:18.635313       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:31:28.642234       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:31:28.642267       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:31:28.642507       1 main.go:150] handling current node\nI0114 21:31:28.642526       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:31:28.642531       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:31:38.732730       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:31:38.732775       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:31:38.733186       1 main.go:150] handling current node\nI0114 21:31:38.733205       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:31:38.733210       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:31:48.840144       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:31:48.840232       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:31:48.840541       1 main.go:150] handling current node\nI0114 21:31:48.840565       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:31:48.840572       1 
main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:31:58.933901       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:31:58.933932       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:31:58.934132       1 main.go:150] handling current node\nI0114 21:31:58.934148       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:31:58.934153       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:32:09.032905       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:32:09.032942       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:32:09.033145       1 main.go:150] handling current node\nI0114 21:32:09.033162       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:32:09.033175       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:32:19.132766       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:32:19.132800       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:32:19.133008       1 main.go:150] handling current node\nI0114 21:32:19.133027       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:32:19.133033       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \nI0114 21:32:29.236250       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:32:29.236279       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:32:29.236473       1 main.go:150] handling current node\nI0114 21:32:29.236488       1 main.go:161] Handling node with IP: 172.17.0.3\nI0114 21:32:29.236496       1 main.go:162] Node kind-worker2 has CIDR 10.244.1.0/24 \n==== END logs for container kindnet-cni of pod kube-system/kindnet-6rhkp ====\n==== START logs for container kindnet-cni of pod kube-system/kindnet-jxzbl ====\nI0114 21:28:06.632063       1 main.go:64] hostIP = 172.17.0.3\npodIP = 172.17.0.3\nI0114 21:28:07.135979       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:28:07.136006  
     1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:28:07.136298       1 routes.go:47] Adding route {Ifindex: 0 Dst: 10.244.0.0/24 Src: <nil> Gw: 172.17.0.2 Flags: [] Table: 0} \nI0114 21:28:07.136372       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:28:07.136376       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:28:07.136449       1 routes.go:47] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.17.0.4 Flags: [] Table: 0} \nI0114 21:28:07.136459       1 main.go:150] handling current node\nI0114 21:28:17.332451       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:28:17.332494       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:28:17.333480       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:28:17.333546       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:28:17.333646       1 main.go:150] handling current node\nI0114 21:28:27.432269       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:28:27.432300       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:28:27.432428       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:28:27.432436       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:28:27.432611       1 main.go:150] handling current node\nI0114 21:28:37.534388       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:28:37.534413       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:28:37.534507       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:28:37.534516       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:28:37.534565       1 main.go:150] handling current node\nI0114 21:28:47.633895       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:28:47.633927       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:28:47.634050       1 main.go:161] Handling node with 
IP: 172.17.0.4\nI0114 21:28:47.634059       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:28:47.634111       1 main.go:150] handling current node\nI0114 21:28:57.731929       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:28:57.731958       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:28:57.732118       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:28:57.732127       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:28:57.732202       1 main.go:150] handling current node\nI0114 21:29:07.745618       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:29:07.745654       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:29:07.745937       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:29:07.745951       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:29:07.746725       1 main.go:150] handling current node\nI0114 21:29:17.831881       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:29:17.831910       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:29:17.832138       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:29:17.832147       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:29:17.832516       1 main.go:150] handling current node\nI0114 21:29:27.932763       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:29:27.932793       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:29:27.933035       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:29:27.933045       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:29:27.933163       1 main.go:150] handling current node\nI0114 21:29:37.941940       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:29:37.941972       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:29:37.942180       1 main.go:161] Handling node with IP: 
172.17.0.4\nI0114 21:29:37.942199       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:29:37.942327       1 main.go:150] handling current node\nI0114 21:29:47.987846       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:29:47.987874       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:29:47.988082       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:29:47.988091       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:29:47.988249       1 main.go:150] handling current node\nI0114 21:29:58.016872       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:29:58.016901       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:29:58.040784       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:29:58.040816       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:29:58.041058       1 main.go:150] handling current node\nI0114 21:30:08.132596       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:30:08.132630       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:30:08.132906       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:30:08.132933       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:30:08.134751       1 main.go:150] handling current node\nI0114 21:30:18.332974       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:30:18.333013       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:30:18.333315       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:30:18.333337       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:30:18.333525       1 main.go:150] handling current node\nI0114 21:30:28.434682       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:30:28.434714       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:30:28.434920       1 main.go:161] Handling node with IP: 
172.17.0.4\nI0114 21:30:28.434926       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:30:28.435037       1 main.go:150] handling current node\nI0114 21:30:38.536788       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:30:38.536814       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:30:38.537086       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:30:38.537103       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:30:38.537256       1 main.go:150] handling current node\nI0114 21:30:48.573195       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:30:48.573240       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:30:48.573572       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:30:48.573586       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:30:48.573774       1 main.go:150] handling current node\nI0114 21:30:58.637906       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:30:58.637940       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:30:58.638146       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:30:58.638155       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:30:58.638285       1 main.go:150] handling current node\nI0114 21:31:08.673845       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:31:08.673875       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:31:08.674172       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:31:08.674182       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:31:08.674331       1 main.go:150] handling current node\nI0114 21:31:18.733211       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:31:18.733240       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:31:18.738549       1 main.go:161] Handling node with IP: 
172.17.0.4\nI0114 21:31:18.738597       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:31:18.738842       1 main.go:150] handling current node\nI0114 21:31:28.835399       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:31:28.835432       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:31:28.840499       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:31:28.840542       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:31:28.878165       1 main.go:150] handling current node\nI0114 21:31:38.939055       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:31:38.939107       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:31:38.939620       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:31:38.939639       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:31:38.939821       1 main.go:150] handling current node\nI0114 21:31:49.134978       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:31:49.135022       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:31:49.221625       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:31:49.221655       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:31:49.221853       1 main.go:150] handling current node\nI0114 21:31:59.232828       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:31:59.232865       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:31:59.247536       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:31:59.247570       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:31:59.247732       1 main.go:150] handling current node\nI0114 21:32:09.251908       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:32:09.251942       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:32:09.252202       1 main.go:161] Handling node with IP: 
172.17.0.4\nI0114 21:32:09.252220       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:32:09.252517       1 main.go:150] handling current node\nI0114 21:32:19.334344       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:32:19.334372       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:32:19.334610       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:32:19.334627       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:32:19.334762       1 main.go:150] handling current node\nI0114 21:32:29.436825       1 main.go:161] Handling node with IP: 172.17.0.2\nI0114 21:32:29.436854       1 main.go:162] Node kind-control-plane has CIDR 10.244.0.0/24 \nI0114 21:32:29.437093       1 main.go:161] Handling node with IP: 172.17.0.4\nI0114 21:32:29.437103       1 main.go:162] Node kind-worker has CIDR 10.244.2.0/24 \nI0114 21:32:29.437264       1 main.go:150] handling current node\n==== END logs for container kindnet-cni of pod kube-system/kindnet-jxzbl ====\n==== START logs for container kube-apiserver of pod kube-system/kube-apiserver-kind-control-plane ====\nFlag --insecure-port has been deprecated, This flag will be removed in a future version.\nI0114 21:27:20.489558       1 server.go:596] external host was not specified, using 172.17.0.2\nI0114 21:27:20.489988       1 server.go:150] Version: v1.18.0-alpha.1.681+c12a96f7f64648\nI0114 21:27:21.705910       1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.\nI0114 21:27:21.705951       1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: 
LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.\nI0114 21:27:21.706881       1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.\nI0114 21:27:21.706909       1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.\nI0114 21:27:21.709658       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:21.709716       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:21.727420       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:21.727466       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:21.738741       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:21.738795       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:21.802787       1 master.go:264] Using reconciler: lease\nI0114 21:27:21.803439       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:21.803475       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:21.817619       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:21.817662       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:21.831956       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:21.831994       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  
<nil>}]\nI0114 21:27:21.842956       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:21.842994       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:21.854912       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:21.854961       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:21.863812       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:21.863849       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:21.874637       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:21.874679       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:21.885553       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:21.885594       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:21.894719       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:21.894756       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:21.903404       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:21.903446       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:21.912294       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:21.912335       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:21.922035       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:21.922070       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:21.930790       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:21.930824       1 endpoint.go:68] ccResolverWrapper: 
sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:21.940452       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:21.940519       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:21.950933       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:21.950970       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:21.959474       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:21.959510       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:21.968347       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:21.968387       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:21.979762       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:21.979797       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:21.987727       1 rest.go:113] the default service ipfamily for this cluster is: IPv4\nI0114 21:27:22.133631       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.133683       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.145759       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.145803       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.154325       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.154367       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.162917       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.162951       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 
 <nil>}]\nI0114 21:27:22.175363       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.175401       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.184062       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.184101       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.192310       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.192341       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.202904       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.202941       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.212012       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.212042       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.223677       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.223722       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.232224       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.232263       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.244129       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.244178       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.252874       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.252913       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.261185       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.261219       1 endpoint.go:68] ccResolverWrapper: 
sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.269775       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.269822       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.279609       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.279655       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.291328       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.291366       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.300097       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.300140       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.313678       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.313716       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.326451       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.326500       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.338738       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.338776       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.347671       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.347724       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.357587       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.357627       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.372393       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 
21:27:22.372424       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.384325       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.384367       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.395468       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.395553       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.410046       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.410092       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.450895       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.450942       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.461445       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.461497       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.472363       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.472411       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.483832       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.483874       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.496363       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.496401       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.504927       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.504961       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.515846       1 
client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.515893       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.524631       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.524672       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.533633       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.533678       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.543814       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.543851       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.567849       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.567894       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.576558       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.576607       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.587196       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.587310       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.595922       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.595955       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.605944       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.605979       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:22.704447       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.704487       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: 
[{https://127.0.0.1:2379 0  <nil>}]\nW0114 21:27:22.817048       1 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.\nW0114 21:27:22.827740       1 genericapiserver.go:404] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.\nW0114 21:27:22.839539       1 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.\nW0114 21:27:22.877182       1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.\nW0114 21:27:22.884500       1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.\nW0114 21:27:22.915789       1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.\nW0114 21:27:22.963470       1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.\nW0114 21:27:22.963514       1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.\nI0114 21:27:22.985898       1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.\nI0114 21:27:22.985942       1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.\nI0114 21:27:22.989105       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:22.989158       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 21:27:23.002255       1 client.go:361] parsed scheme: \"endpoint\"\nI0114 21:27:23.002302       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]\nI0114 
21:27:25.307299       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt\nI0114 21:27:25.307310       1 dynamic_cafile_content.go:166] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt\nI0114 21:27:25.307602       1 dynamic_serving_content.go:129] Starting serving-cert::/etc/kubernetes/pki/apiserver.crt::/etc/kubernetes/pki/apiserver.key\nI0114 21:27:25.308139       1 secure_serving.go:178] Serving securely on [::]:6443\nI0114 21:27:25.308233       1 available_controller.go:386] Starting AvailableConditionController\nI0114 21:27:25.308246       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller\nI0114 21:27:25.308275       1 tlsconfig.go:241] Starting DynamicServingCertificateController\nI0114 21:27:25.308292       1 autoregister_controller.go:140] Starting autoregister controller\nI0114 21:27:25.308311       1 cache.go:32] Waiting for caches to sync for autoregister controller\nI0114 21:27:25.308433       1 apiservice_controller.go:94] Starting APIServiceRegistrationController\nI0114 21:27:25.308481       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller\nI0114 21:27:25.308494       1 crd_finalizer.go:264] Starting CRDFinalizer\nI0114 21:27:25.308521       1 naming_controller.go:289] Starting NamingConditionController\nI0114 21:27:25.308549       1 establishing_controller.go:74] Starting EstablishingController\nI0114 21:27:25.308555       1 controller.go:86] Starting OpenAPI controller\nI0114 21:27:25.308569       1 nonstructuralschema_controller.go:185] Starting NonStructuralSchemaConditionController\nI0114 21:27:25.308575       1 customresource_discovery_controller.go:209] Starting DiscoveryController\nI0114 21:27:25.308595       1 apiapproval_controller.go:184] Starting KubernetesAPIApprovalPolicyConformantConditionController\nI0114 21:27:25.308482       1 crdregistration_controller.go:111] Starting crd-autoregister controller\nI0114 
21:27:25.308615       1 shared_informer.go:206] Waiting for caches to sync for crd-autoregister\nI0114 21:27:25.308363       1 controller.go:81] Starting OpenAPI AggregationController\nI0114 21:27:25.309444       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller\nI0114 21:27:25.309455       1 shared_informer.go:206] Waiting for caches to sync for cluster_authentication_trust_controller\nI0114 21:27:25.309532       1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt\nI0114 21:27:25.309570       1 dynamic_cafile_content.go:166] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt\nE0114 21:27:25.318011       1 controller.go:151] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.2, ResourceVersion: 0, AdditionalErrorMsg: \nI0114 21:27:25.408491       1 cache.go:39] Caches are synced for AvailableConditionController controller\nI0114 21:27:25.408602       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller\nI0114 21:27:25.408656       1 cache.go:39] Caches are synced for autoregister controller\nI0114 21:27:25.409003       1 shared_informer.go:213] Caches are synced for crd-autoregister \nI0114 21:27:25.410407       1 shared_informer.go:213] Caches are synced for cluster_authentication_trust_controller \nI0114 21:27:26.307319       1 controller.go:107] OpenAPI AggregationController: Processing item \nI0114 21:27:26.307364       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).\nI0114 21:27:26.307380       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).\nI0114 21:27:26.313337       1 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000\nI0114 21:27:26.318007       1 
storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
I0114 21:27:26.318035       1 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
I0114 21:27:26.691863       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0114 21:27:26.734555       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0114 21:27:26.802423       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.17.0.2]
I0114 21:27:26.803460       1 controller.go:606] quota admission added evaluator for: endpoints
I0114 21:27:27.489785       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0114 21:27:28.048153       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0114 21:27:28.376312       1 controller.go:606] quota admission added evaluator for: deployments.apps
I0114 21:27:28.518266       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0114 21:27:43.764993       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0114 21:27:43.772370       1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0114 21:28:47.458888       1 controller.go:606] quota admission added evaluator for: cronjobs.batch
I0114 21:28:48.747347       1 controller.go:606] quota admission added evaluator for: statefulsets.apps
I0114 21:28:50.930643       1 client.go:361] parsed scheme: "endpoint"
I0114 21:28:50.930699       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]
I0114 21:28:51.900800       1 controller.go:606] quota admission added evaluator for: jobs.batch
I0114 21:28:52.262291       1 trace.go:116] Trace[441321173]: "Create" url:/api/v1/namespaces/projected-6774/pods,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance],client:172.17.0.1 (started: 2020-01-14 21:28:51.426392447 +0000 UTC m=+91.055274850) (total time: 835.842887ms):
Trace[441321173]: [473.478919ms] [473.340507ms] About to store object in database
Trace[441321173]: [835.770706ms] [362.291787ms] Object stored in database
I0114 21:28:52.262592       1 trace.go:116] Trace[1817877885]: "Get" url:/api/v1/namespaces/dns-1301/pods/dns-test-f76a055f-02a8-4627-ba72-68919a33dce8,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [sig-network] DNS should provide DNS for ExternalName services [Conformance],client:172.17.0.1 (started: 2020-01-14 21:28:51.675898987 +0000 UTC m=+91.304781392) (total time: 586.650814ms):
Trace[1817877885]: [586.59199ms] [586.582157ms] About to write a response
I0114 21:28:52.264510       1 trace.go:116] Trace[1223990433]: "List etcd3" key:/resourcequotas/provisioning-1934,resourceVersion:,limit:0,continue: (started: 2020-01-14 21:28:51.661312758 +0000 UTC m=+91.290195158) (total time: 603.171369ms):
Trace[1223990433]: [603.171369ms] [603.171369ms] END
I0114 21:28:52.264788       1 trace.go:116] Trace[2048983966]: "List" url:/api/v1/namespaces/provisioning-1934/resourcequotas,user-agent:kube-apiserver/v1.18.0 (linux/amd64) kubernetes/c12a96f,client:127.0.0.1 (started: 2020-01-14 21:28:51.661291642 +0000 UTC m=+91.290174036) (total time: 603.447625ms):
Trace[2048983966]: [603.399359ms] [603.387404ms] Listing from storage done
I0114 21:28:52.271858       1 trace.go:116] Trace[1894841817]: "Create" url:/api/v1/namespaces/provisioning-1934/serviceaccounts,user-agent:kube-controller-manager/v1.18.0 (linux/amd64) kubernetes/c12a96f/system:serviceaccount:kube-system:service-account-controller,client:172.17.0.2 (started: 2020-01-14 21:28:51.660359249 +0000 UTC m=+91.289241654) (total time: 608.524045ms):
Trace[1894841817]: [608.483395ms] [608.242563ms] Object stored in database
I0114 21:28:58.741075       1 trace.go:116] Trace[1734932823]: "Get" url:/api/v1/namespaces/kubectl-4632/pods/agnhost-master-h6rdz/log,user-agent:kubectl/v1.18.0 (linux/amd64) kubernetes/c12a96f,client:172.17.0.1 (started: 2020-01-14 21:28:58.141548419 +0000 UTC m=+97.770430819) (total time: 599.332019ms):
Trace[1734932823]: [599.330352ms] [588.78844ms] Transformed response object
I0114 21:29:02.560890       1 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
E0114 21:29:02.610498       1 upgradeaware.go:357] Error proxying data from client to backend: write tcp 172.17.0.2:59350->172.17.0.3:10250: write: broken pipe
E0114 21:29:02.610876       1 upgradeaware.go:371] Error proxying data from backend to client: tls: use of closed connection
I0114 21:29:10.211679       1 controller.go:606] quota admission added evaluator for: namespaces
W0114 21:29:17.159066       1 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
E0114 21:29:19.638174       1 upgradeaware.go:357] Error proxying data from client to backend: write tcp 172.17.0.2:60908->172.17.0.4:10250: write: broken pipe
E0114 21:29:19.638274       1 upgradeaware.go:371] Error proxying data from backend to client: tls: use of closed connection
I0114 21:29:21.922928       1 client.go:361] parsed scheme: "endpoint"
I0114 21:29:21.922978       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]
I0114 21:29:24.685880       1 controller.go:606] quota admission added evaluator for: e2e-test-crd-publish-openapi-6716-crds.crd-publish-openapi-test-empty.example.com
I0114 21:29:26.252140       1 client.go:361] parsed scheme: "endpoint"
I0114 21:29:26.252310       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]
I0114 21:29:26.265730       1 client.go:361] parsed scheme: "endpoint"
I0114 21:29:26.265790       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]
I0114 21:29:26.280488       1 controller.go:606] quota admission added evaluator for: e2e-test-crd-webhook-3635-crds.stable.example.com
I0114 21:29:26.449576       1 client.go:361] parsed scheme: "endpoint"
I0114 21:29:26.449625       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]
I0114 21:29:26.485320       1 client.go:361] parsed scheme: "endpoint"
I0114 21:29:26.485364       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0  <nil>}]
I0114 21:29:26.789245       1 trace.go:116] Trace[1403165342]: "Delete" url:/api/v1/namespaces/disruption-1481/events (started: 2020-01-14 21:29:26.092546102 +0000 UTC m=+125.721428499) (total time: 696.665001ms):
Trace[1403165342]: [696.665001ms] [696.665001ms] END
W0114 21:29:27.896194       1 dispatcher.go:141] rejected by webhook "validating-is-webhook-configuration-ready.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"validating-is-webhook-configuration-ready.k8s.io\" denied the request: this webhook denies all requests", Reason:"", Details:(*v1.StatusDetails)(nil), Code:400}}
W0114 21:29:27.907309       1 dispatcher.go:141] rejected by webhook "deny-crd-with-unwanted-label.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"admission webhook \"deny-crd-with-unwanted-label.k8s.io\" denied the request: the crd contains unwanted label", Reason:"", Details:(*v1.StatusDetails)(nil), Code:400}}
E0114 21:29:36.968772       1 upgradeaware.go:357] Error proxying data from client to backend: write tcp 172.17.0.2:33152->172.17.0.4:10250: write: broken pipe
E0114 21:29:36.968996       1 upgradeaware.go:371] Error proxying data from backend to client: tls: use of closed connection
I0114 21:29:44.172135       1 controller.go:606] quota admission added evaluator for: podtemplates
E0114 21:29:49.492558       1 upgradeaware.go:357] Error proxying data from client to backend: write tcp 172.17.0.2:33528->172.17.0.4:10250: write: broken pipe
I0114 21:29:50.587715       1 trace.go:116] Trace[149627181]: "Delete" url:/api/v1/namespaces/tables-4388/podtemplates (started: 2020-01-14 21:29:49.78948332 +0000 UTC m=+149.418365705) (total time: 798.194184ms):
Trace[149627181]: [798.194184ms] [798.194184ms] END
E0114 21:29:55.053429       1 upgradeaware.go:357] Error proxying data from client to backend: write tcp 172.17.0.2:33934->172.17.0.4:10250: write: broken pipe
E0114 21:29:55.053800       1 upgradeaware.go:371] Error proxying data from backend to client: tls: use of closed connection
I0114 21:30:03.197432       1 trace.go:116] Trace[1235685387]: "Delete" url:/api/v1/namespaces/dns-1301/events (started: 2020-01-14 21:30:02.173367705 +0000 UTC m=+161.802250096) (total time: 1.024027622s):
Trace[1235685387]: [1.024027622s] [1.024027622s] END
E0114 21:30:03.415743       1 upgradeaware.go:357] Error proxying data from client to backend: tls: use of closed connection
I0114 21:30:06.591945       1 trace.go:116] Trace[415901570]: "Get" url:/api/v1/namespaces/provisioning-7976/pods/csi-hostpath-provisioner-0/log,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount,client:172.17.0.1 (started: 2020-01-14 21:29:36.350387908 +0000 UTC m=+135.979270317) (total time: 30.241510047s):
Trace[415901570]: [30.241508528s] [30.234621659s] Transformed response object
I0114 21:30:06.591997       1 trace.go:116] Trace[1290156845]: "Get" url:/api/v1/namespaces/provisioning-7976/pods/csi-hostpathplugin-0/log,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount,client:172.17.0.1 (started: 2020-01-14 21:29:43.35472784 +0000 UTC m=+142.983610238) (total time: 23.237232825s):
Trace[1290156845]: [23.237231349s] [23.234401286s] Transformed response object
I0114 21:30:06.591945       1 trace.go:116] Trace[553982727]: "Get" url:/api/v1/namespaces/provisioning-7976/pods/csi-hostpathplugin-0/log,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount,client:172.17.0.1 (started: 2020-01-14 21:29:43.397469666 +0000 UTC m=+143.026352084) (total time: 23.19442159s):
Trace[553982727]: [23.194419561s] [23.191734639s] Transformed response object
I0114 21:30:06.592064       1 trace.go:116] Trace[1808141683]: "Get" url:/api/v1/namespaces/provisioning-7976/pods/csi-hostpath-resizer-0/log,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount,client:172.17.0.1 (started: 2020-01-14 21:29:44.546464224 +0000 UTC m=+144.175346625) (total time: 22.045574741s):
Trace[1808141683]: [22.045573776s] [22.042757008s] Transformed response object
I0114 21:30:06.592209       1 trace.go:116] Trace[1655809365]: "Get" url:/api/v1/namespaces/provisioning-7976/pods/csi-snapshotter-0/log,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount,client:172.17.0.1 (started: 2020-01-14 21:29:51.392801234 +0000 UTC m=+151.021683633) (total time: 15.19937501s):
Trace[1655809365]: [15.199373932s] [15.179367272s] Transformed response object
I0114 21:30:06.592264       1 trace.go:116] Trace[1177877046]: "Get" url:/api/v1/namespaces/provisioning-7976/pods/csi-hostpath-attacher-0/log,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount,client:172.17.0.1 (started: 2020-01-14 21:29:35.75531125 +0000 UTC m=+135.384193663) (total time: 30.836925859s):
Trace[1177877046]: [30.836924966s] [30.83059366s] Transformed response object
I0114 21:30:06.592375       1 trace.go:116] Trace[454175094]: "Get" url:/api/v1/namespaces/provisioning-7976/pods/csi-hostpathplugin-0/log,user-agent:e2e.test/v0.0.0 (linux/amd64) kubernetes/$Format -- [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount,client:172.17.0.1 (started: 2020-01-14 21:29:43.370639934 +0000 UTC m=+142.999522343) (total time: 23.221711827s):
Trace[454175094]: [23.221710841s] [23.213837407s] Transformed response object
E0114 21:30:09.221129       1 upgradeaware.go:357] Error proxying data from client to backend: tls: use of closed connection
E0114 21:30:09.605009       1 upgradeaware.go:357] Error proxying data from client to backend: tls: use of closed connection
E0114 21:30:11.227585       1 upgradeaware.go:371] Error proxying data from backend to client: tls: use of closed connection
I0114 21:30:12.210172       1 trace.go:116] Trace[1898607825]: "Delete" url:/api/v1/namespaces/provisioning-7976/events (started: 2020-01-14 21:30:11.641877951 +0000 UTC m=+171.270760360) (total time: 568.252513ms):
Trace[1898607825]: [568.252513ms] [568.252513ms] END
W0114 21:30:24.971329       1 dispatcher.go:141] rejected by webhook "validating-is-webhook-configuration-ready.k8s.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:"