PR aojea: increase iptables sync period
Result FAILURE
Tests 0 failed / 0 succeeded
Started 2020-07-07 07:44
Elapsed 43m17s
Revision 4edaa1c79f9518ffc07ba4c034de3d4112173521
Refs 1706

No Test Failures!


Error lines from build-log.txt

... skipping 221 lines ...
Analyzing: 4 targets (21 packages loaded, 27 targets configured)
Analyzing: 4 targets (491 packages loaded, 1520 targets configured)
Analyzing: 4 targets (1694 packages loaded, 13209 targets configured)
Analyzing: 4 targets (2307 packages loaded, 15750 targets configured)
Analyzing: 4 targets (2307 packages loaded, 15750 targets configured)
Analyzing: 4 targets (2308 packages loaded, 15750 targets configured)
DEBUG: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/bazel_gazelle/internal/go_repository.bzl:184:13: org_golang_x_tools: gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go:2:43: expected 'package', found 'EOF'
gazelle: found packages lib (issue27856.go) and nointerface (nointerface.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gccgoimporter/testdata
gazelle: found packages a (a.go) and b (b.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gcimporter/testdata
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go:1:34: expected 'package', found 'EOF'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go:1:16: expected ';', found '.'
gazelle: finding module path for import domain.name/importdecl: exit status 1: go: finding module for package domain.name/importdecl
can't load package: cannot find module providing package domain.name/importdecl: module domain.name/importdecl: reading https://proxy.golang.org/domain.name/importdecl/@v/list: 410 Gone
	server response: not found: domain.name/importdecl@latest: unrecognized import path "domain.name/importdecl": https fetch: Get "https://domain.name/importdecl?go-get=1": dial tcp: lookup domain.name on 8.8.8.8:53: no such host
gazelle: finding module path for import old.com/one: exit status 1: go: finding module for package old.com/one
can't load package: cannot find module providing package old.com/one: module old.com/one: reading https://proxy.golang.org/old.com/one/@v/list: 410 Gone
	server response: not found: old.com/one@latest: unrecognized import path "old.com/one": https fetch: Get "https://old.com/one?go-get=1": dial tcp 23.23.86.44:443: connect: connection refused
gazelle: finding module path for import titanic.biz/bar: exit status 1: go: finding module for package titanic.biz/bar
can't load package: cannot find module providing package titanic.biz/bar: module titanic.biz/bar: reading https://proxy.golang.org/titanic.biz/bar/@v/list: 410 Gone
	server response: not found: titanic.biz/bar@latest: unrecognized import path "titanic.biz/bar": parsing titanic.biz/bar: XML syntax error on line 1: expected attribute name in element
gazelle: finding module path for import titanic.biz/foo: exit status 1: go: finding module for package titanic.biz/foo
can't load package: cannot find module providing package titanic.biz/foo: module titanic.biz/foo: reading https://proxy.golang.org/titanic.biz/foo/@v/list: 410 Gone
	server response: not found: titanic.biz/foo@latest: unrecognized import path "titanic.biz/foo": parsing titanic.biz/foo: XML syntax error on line 1: expected attribute name in element
gazelle: finding module path for import fruit.io/pear: exit status 1: go: finding module for package fruit.io/pear
can't load package: cannot find module providing package fruit.io/pear: module fruit.io/pear: reading https://proxy.golang.org/fruit.io/pear/@v/list: 410 Gone
	server response: not found: fruit.io/pear@latest: unrecognized import path "fruit.io/pear": https fetch: Get "https://fruit.io/pear?go-get=1": x509: certificate is valid for *.gridserver.com, gridserver.com, not fruit.io
gazelle: finding module path for import fruit.io/banana: exit status 1: go: finding module for package fruit.io/banana
can't load package: cannot find module providing package fruit.io/banana: module fruit.io/banana: reading https://proxy.golang.org/fruit.io/banana/@v/list: 410 Gone
	server response: not found: fruit.io/banana@latest: unrecognized import path "fruit.io/banana": https fetch: Get "https://fruit.io/banana?go-get=1": x509: certificate is valid for *.gridserver.com, gridserver.com, not fruit.io
... skipping 159 lines ...
WARNING: Waiting for server process to terminate (waited 10 seconds, waiting at most 60)
WARNING: Waiting for server process to terminate (waited 30 seconds, waiting at most 60)
INFO: Waited 60 seconds for server process (pid=6808) to terminate.
WARNING: Waiting for server process to terminate (waited 5 seconds, waiting at most 10)
WARNING: Waiting for server process to terminate (waited 10 seconds, waiting at most 10)
INFO: Waited 10 seconds for server process (pid=6808) to terminate.
FATAL: Attempted to kill stale server process (pid=6808) using SIGKILL, but it did not die in a timely fashion.
+ true
+ pkill ^bazel
+ true
+ mkdir -p _output/bin/
+ cp bazel-bin/test/e2e/e2e.test _output/bin/
+ find /home/prow/go/src/k8s.io/kubernetes/bazel-bin/ -name kubectl -type f
... skipping 46 lines ...
localAPIEndpoint:
  advertiseAddress: 172.18.0.3
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.3
---
apiVersion: kubeadm.k8s.io/v1beta2
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 172.18.0.3
... skipping 4 lines ...
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.3
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 34 lines ...
localAPIEndpoint:
  advertiseAddress: 172.18.0.2
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.2
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: kind-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.2
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 34 lines ...
localAPIEndpoint:
  advertiseAddress: 172.18.0.4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.4
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: kind-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.4
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
... skipping 39 lines ...
I0707 08:05:07.066189     314 checks.go:376] validating the presence of executable ebtables
I0707 08:05:07.066337     314 checks.go:376] validating the presence of executable ethtool
I0707 08:05:07.066401     314 checks.go:376] validating the presence of executable socat
I0707 08:05:07.066480     314 checks.go:376] validating the presence of executable tc
I0707 08:05:07.066571     314 checks.go:376] validating the presence of executable touch
I0707 08:05:07.066695     314 checks.go:520] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_PIDS: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0707 08:05:07.114478     314 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0707 08:05:07.158563     314 checks.go:618] validating kubelet version
I0707 08:05:07.791350     314 checks.go:128] validating if the "kubelet" service is enabled and active
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
... skipping 101 lines ...
I0707 08:05:29.955055     314 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 53 milliseconds
I0707 08:05:30.456940     314 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 55 milliseconds
I0707 08:05:30.971108     314 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 49 milliseconds
I0707 08:05:31.503435     314 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 96 milliseconds
I0707 08:05:32.046045     314 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 145 milliseconds
I0707 08:05:42.400940     314 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s  in 10000 milliseconds
I0707 08:05:49.965747     314 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 7065 milliseconds
I0707 08:05:50.416561     314 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 12 milliseconds
I0707 08:05:50.918979     314 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 16 milliseconds
I0707 08:05:51.417612     314 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 6 milliseconds
I0707 08:05:51.909888     314 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 8 milliseconds
I0707 08:05:52.403007     314 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 1 milliseconds
I0707 08:05:52.907239     314 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=10s 200 OK in 6 milliseconds
[apiclient] All control plane components are healthy after 38.084276 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0707 08:05:52.909520     314 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
I0707 08:05:52.923062     314 round_trippers.go:443] POST https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 11 milliseconds
I0707 08:05:52.938092     314 round_trippers.go:443] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 10 milliseconds
... skipping 36 lines ...
I0707 08:05:54.232317     314 kubeletfinalize.go:88] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
I0707 08:05:54.233020     314 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf
I0707 08:05:54.233734     314 kubeletfinalize.go:132] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": read tcp 127.0.0.1:48364->127.0.0.1:10248: read: connection reset by peer.
I0707 08:05:55.025969     314 round_trippers.go:443] GET https://kind-control-plane:6443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns 200 OK in 27 milliseconds
I0707 08:05:55.040523     314 round_trippers.go:443] GET https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps/kube-dns?timeout=10s 404 Not Found in 4 milliseconds
I0707 08:05:55.045402     314 round_trippers.go:443] GET https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps/coredns?timeout=10s 404 Not Found in 3 milliseconds
I0707 08:05:55.067434     314 round_trippers.go:443] GET https://kind-control-plane:6443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-dns 200 OK in 21 milliseconds
I0707 08:05:55.109949     314 round_trippers.go:443] POST https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 35 milliseconds
I0707 08:05:55.141036     314 round_trippers.go:443] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/clusterroles?timeout=10s 201 Created in 30 milliseconds
... skipping 62 lines ...
I0707 08:06:29.237597     765 checks.go:376] validating the presence of executable ebtables
I0707 08:06:29.237703     765 checks.go:376] validating the presence of executable ethtool
I0707 08:06:29.238389     765 checks.go:376] validating the presence of executable socat
I0707 08:06:29.238503     765 checks.go:376] validating the presence of executable tc
I0707 08:06:29.239204     765 checks.go:376] validating the presence of executable touch
I0707 08:06:29.239276     765 checks.go:520] running all checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_HUGETLB: enabled
CGROUPS_PIDS: enabled
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0707 08:06:29.281277     765 checks.go:406] checking whether the given node name is reachable using net.LookupHost
I0707 08:06:29.341078     765 checks.go:618] validating kubelet version
I0707 08:06:30.064666     765 checks.go:128] validating if the "kubelet" service is enabled and active
I0707 08:06:30.175440     765 checks.go:201] validating availability of port 10250
I0707 08:06:30.175696     765 checks.go:286] validating the existence of file /etc/kubernetes/pki/ca.crt
I0707 08:06:30.175732     765 checks.go:432] validating if the connectivity type is via proxy or direct
... skipping 67 lines ...
I0707 08:06:29.240741     767 checks.go:376] validating the presence of executable ebtables
I0707 08:06:29.240807     767 checks.go:376] validating the presence of executable ethtool
I0707 08:06:29.240877     767 checks.go:376] validating the presence of executable socat
I0707 08:06:29.241007     767 checks.go:376] validating the presence of executable tc
I0707 08:06:29.241030     767 checks.go:376] validating the presence of executable touch
I0707 08:06:29.241061     767 checks.go:520] running all checks
	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/4.15.0-1044-gke\n", err: exit status 1
I0707 08:06:29.321939     767 checks.go:406] checking whether the given node name is reachable using net.LookupHost
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.15.0-1044-gke
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
... skipping 89 lines ...

Running in parallel across 25 nodes

Jul  7 08:07:25.847: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:07:25.852: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jul  7 08:07:25.962: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jul  7 08:07:26.241: INFO: The status of Pod kube-scheduler-kind-control-plane is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jul  7 08:07:26.241: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jul  7 08:07:26.241: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jul  7 08:07:26.241: INFO: POD                                NODE                PHASE    GRACE  CONDITIONS
Jul  7 08:07:26.241: INFO: kube-scheduler-kind-control-plane  kind-control-plane  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 08:06:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 08:06:02 +0000 UTC ContainersNotReady containers with unready status: [kube-scheduler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 08:06:02 +0000 UTC ContainersNotReady containers with unready status: [kube-scheduler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 08:06:02 +0000 UTC  }]
Jul  7 08:07:26.241: INFO: 
Jul  7 08:07:28.275: INFO: The status of Pod kube-scheduler-kind-control-plane is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jul  7 08:07:28.275: INFO: 11 / 12 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
Jul  7 08:07:28.275: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jul  7 08:07:28.275: INFO: POD                                NODE                PHASE    GRACE  CONDITIONS
Jul  7 08:07:28.275: INFO: kube-scheduler-kind-control-plane  kind-control-plane  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 08:06:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 08:06:02 +0000 UTC ContainersNotReady containers with unready status: [kube-scheduler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-07 08:06:02 +0000 UTC ContainersNotReady containers with unready status: [kube-scheduler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-07 08:06:02 +0000 UTC  }]
Jul  7 08:07:28.275: INFO: 
Jul  7 08:07:30.306: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
... skipping 1118 lines ...
  test/e2e/framework/framework.go:175
Jul  7 08:07:31.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7224" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] GCP Volumes
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 74 lines ...
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:07:30.641: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename secrets
Jul  7 08:07:32.152: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name secret-emptykey-test-ff87530f-66ce-4403-be69-6955df19f9ba
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
Jul  7 08:07:32.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7042" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:07:32.335: INFO: Only supported for providers [gce gke] (not skeleton)
... skipping 147 lines ...
  test/e2e/framework/framework.go:175
Jul  7 08:07:32.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4666" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/storage/testsuites/base.go:127
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
... skipping 7 lines ...
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/testsuites/base.go:126
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:191

      Driver "csi-hostpath" does not support topology - skipping

      test/e2e/storage/testsuites/topology.go:96
------------------------------
... skipping 62 lines ...
  test/e2e/framework/framework.go:175
Jul  7 08:07:33.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-8087" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":-1,"completed":1,"skipped":7,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:07:33.510: INFO: Driver vsphere doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/framework/framework.go:175

... skipping 56 lines ...
• [SLOW TEST:10.217 seconds]
[sig-scheduling] LimitRange
test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:07:40.853: INFO: Only supported for providers [aws] (not skeleton)
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  test/e2e/framework/framework.go:175

... skipping 53 lines ...
  test/e2e/framework/framework.go:175
Jul  7 08:07:41.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-7843" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] server version should find the server version","total":-1,"completed":2,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:17.498 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:07:47.935: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 53 lines ...
• [SLOW TEST:23.088 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:127
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
... skipping 557 lines ...
• [SLOW TEST:41.450 seconds]
[sig-network] Service endpoints latency
test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":1,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 2 lines ...
Jul  7 08:07:31.283: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-f6ce2092-3989-4631-9655-16771e74c67b
STEP: Creating a pod to test consume secrets
Jul  7 08:07:31.381: INFO: Waiting up to 5m0s for pod "pod-secrets-e44c34da-121f-4a8b-8dc3-bd44ef9936d7" in namespace "secrets-3050" to be "Succeeded or Failed"
Jul  7 08:07:31.471: INFO: Pod "pod-secrets-e44c34da-121f-4a8b-8dc3-bd44ef9936d7": Phase="Pending", Reason="", readiness=false. Elapsed: 89.956618ms
Jul  7 08:07:33.553: INFO: Pod "pod-secrets-e44c34da-121f-4a8b-8dc3-bd44ef9936d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172086536s
Jul  7 08:07:35.680: INFO: Pod "pod-secrets-e44c34da-121f-4a8b-8dc3-bd44ef9936d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.298374096s
Jul  7 08:07:38.077: INFO: Pod "pod-secrets-e44c34da-121f-4a8b-8dc3-bd44ef9936d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.695594659s
Jul  7 08:07:40.542: INFO: Pod "pod-secrets-e44c34da-121f-4a8b-8dc3-bd44ef9936d7": Phase="Pending", Reason="", readiness=false. Elapsed: 9.160514998s
Jul  7 08:07:42.739: INFO: Pod "pod-secrets-e44c34da-121f-4a8b-8dc3-bd44ef9936d7": Phase="Pending", Reason="", readiness=false. Elapsed: 11.357188379s
... skipping 9 lines ...
Jul  7 08:08:03.348: INFO: Pod "pod-secrets-e44c34da-121f-4a8b-8dc3-bd44ef9936d7": Phase="Pending", Reason="", readiness=false. Elapsed: 31.967048363s
Jul  7 08:08:05.444: INFO: Pod "pod-secrets-e44c34da-121f-4a8b-8dc3-bd44ef9936d7": Phase="Pending", Reason="", readiness=false. Elapsed: 34.062696294s
Jul  7 08:08:07.573: INFO: Pod "pod-secrets-e44c34da-121f-4a8b-8dc3-bd44ef9936d7": Phase="Pending", Reason="", readiness=false. Elapsed: 36.191754724s
Jul  7 08:08:09.942: INFO: Pod "pod-secrets-e44c34da-121f-4a8b-8dc3-bd44ef9936d7": Phase="Pending", Reason="", readiness=false. Elapsed: 38.560962706s
Jul  7 08:08:11.999: INFO: Pod "pod-secrets-e44c34da-121f-4a8b-8dc3-bd44ef9936d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.617145396s
STEP: Saw pod success
Jul  7 08:08:11.999: INFO: Pod "pod-secrets-e44c34da-121f-4a8b-8dc3-bd44ef9936d7" satisfied condition "Succeeded or Failed"
Jul  7 08:08:12.091: INFO: Trying to get logs from node kind-worker pod pod-secrets-e44c34da-121f-4a8b-8dc3-bd44ef9936d7 container secret-volume-test: <nil>
STEP: delete the pod
Jul  7 08:08:12.730: INFO: Waiting for pod pod-secrets-e44c34da-121f-4a8b-8dc3-bd44ef9936d7 to disappear
Jul  7 08:08:12.855: INFO: Pod pod-secrets-e44c34da-121f-4a8b-8dc3-bd44ef9936d7 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
... skipping 15 lines ...
Jul  7 08:07:30.853: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-4b0dec29-23c8-4e5e-86e6-7d11f0e1f562
STEP: Creating a pod to test consume secrets
Jul  7 08:07:30.955: INFO: Waiting up to 5m0s for pod "pod-secrets-b0227d8b-8b91-42b6-9a11-0aa51f6adaf7" in namespace "secrets-7484" to be "Succeeded or Failed"
Jul  7 08:07:30.979: INFO: Pod "pod-secrets-b0227d8b-8b91-42b6-9a11-0aa51f6adaf7": Phase="Pending", Reason="", readiness=false. Elapsed: 24.509021ms
Jul  7 08:07:33.153: INFO: Pod "pod-secrets-b0227d8b-8b91-42b6-9a11-0aa51f6adaf7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198318009s
Jul  7 08:07:35.263: INFO: Pod "pod-secrets-b0227d8b-8b91-42b6-9a11-0aa51f6adaf7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308468418s
Jul  7 08:07:37.355: INFO: Pod "pod-secrets-b0227d8b-8b91-42b6-9a11-0aa51f6adaf7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.400316307s
Jul  7 08:07:39.443: INFO: Pod "pod-secrets-b0227d8b-8b91-42b6-9a11-0aa51f6adaf7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.48874096s
Jul  7 08:07:41.560: INFO: Pod "pod-secrets-b0227d8b-8b91-42b6-9a11-0aa51f6adaf7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.605136988s
... skipping 11 lines ...
Jul  7 08:08:06.663: INFO: Pod "pod-secrets-b0227d8b-8b91-42b6-9a11-0aa51f6adaf7": Phase="Pending", Reason="", readiness=false. Elapsed: 35.708178936s
Jul  7 08:08:08.740: INFO: Pod "pod-secrets-b0227d8b-8b91-42b6-9a11-0aa51f6adaf7": Phase="Pending", Reason="", readiness=false. Elapsed: 37.785013569s
Jul  7 08:08:10.859: INFO: Pod "pod-secrets-b0227d8b-8b91-42b6-9a11-0aa51f6adaf7": Phase="Pending", Reason="", readiness=false. Elapsed: 39.90392035s
Jul  7 08:08:12.913: INFO: Pod "pod-secrets-b0227d8b-8b91-42b6-9a11-0aa51f6adaf7": Phase="Running", Reason="", readiness=true. Elapsed: 41.958029478s
Jul  7 08:08:14.943: INFO: Pod "pod-secrets-b0227d8b-8b91-42b6-9a11-0aa51f6adaf7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 43.988546729s
STEP: Saw pod success
Jul  7 08:08:14.945: INFO: Pod "pod-secrets-b0227d8b-8b91-42b6-9a11-0aa51f6adaf7" satisfied condition "Succeeded or Failed"
Jul  7 08:08:15.035: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-b0227d8b-8b91-42b6-9a11-0aa51f6adaf7 container secret-volume-test: <nil>
STEP: delete the pod
Jul  7 08:08:15.626: INFO: Waiting for pod pod-secrets-b0227d8b-8b91-42b6-9a11-0aa51f6adaf7 to disappear
Jul  7 08:08:15.651: INFO: Pod pod-secrets-b0227d8b-8b91-42b6-9a11-0aa51f6adaf7 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:45.237 seconds]
[sig-storage] Secrets
test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:08:15.724: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 23 lines ...
Jul  7 08:07:32.710: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-25c424f0-ca37-4a86-bfc4-90d6a236ced9
STEP: Creating a pod to test consume configMaps
Jul  7 08:07:32.980: INFO: Waiting up to 5m0s for pod "pod-configmaps-631f2532-467d-430b-bbf5-a11f34edadbe" in namespace "configmap-4886" to be "Succeeded or Failed"
Jul  7 08:07:33.153: INFO: Pod "pod-configmaps-631f2532-467d-430b-bbf5-a11f34edadbe": Phase="Pending", Reason="", readiness=false. Elapsed: 172.623362ms
Jul  7 08:07:35.265: INFO: Pod "pod-configmaps-631f2532-467d-430b-bbf5-a11f34edadbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.284754505s
Jul  7 08:07:37.366: INFO: Pod "pod-configmaps-631f2532-467d-430b-bbf5-a11f34edadbe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.386055961s
Jul  7 08:07:39.436: INFO: Pod "pod-configmaps-631f2532-467d-430b-bbf5-a11f34edadbe": Phase="Pending", Reason="", readiness=false. Elapsed: 6.456149053s
Jul  7 08:07:41.566: INFO: Pod "pod-configmaps-631f2532-467d-430b-bbf5-a11f34edadbe": Phase="Pending", Reason="", readiness=false. Elapsed: 8.585985842s
Jul  7 08:07:43.600: INFO: Pod "pod-configmaps-631f2532-467d-430b-bbf5-a11f34edadbe": Phase="Pending", Reason="", readiness=false. Elapsed: 10.619213036s
... skipping 11 lines ...
Jul  7 08:08:08.727: INFO: Pod "pod-configmaps-631f2532-467d-430b-bbf5-a11f34edadbe": Phase="Pending", Reason="", readiness=false. Elapsed: 35.746879996s
Jul  7 08:08:10.738: INFO: Pod "pod-configmaps-631f2532-467d-430b-bbf5-a11f34edadbe": Phase="Pending", Reason="", readiness=false. Elapsed: 37.758071289s
Jul  7 08:08:12.806: INFO: Pod "pod-configmaps-631f2532-467d-430b-bbf5-a11f34edadbe": Phase="Pending", Reason="", readiness=false. Elapsed: 39.825379696s
Jul  7 08:08:14.863: INFO: Pod "pod-configmaps-631f2532-467d-430b-bbf5-a11f34edadbe": Phase="Running", Reason="", readiness=true. Elapsed: 41.88221432s
Jul  7 08:08:17.315: INFO: Pod "pod-configmaps-631f2532-467d-430b-bbf5-a11f34edadbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 44.334503643s
STEP: Saw pod success
Jul  7 08:08:17.315: INFO: Pod "pod-configmaps-631f2532-467d-430b-bbf5-a11f34edadbe" satisfied condition "Succeeded or Failed"
Jul  7 08:08:17.621: INFO: Trying to get logs from node kind-worker pod pod-configmaps-631f2532-467d-430b-bbf5-a11f34edadbe container configmap-volume-test: <nil>
STEP: delete the pod
Jul  7 08:08:18.528: INFO: Waiting for pod pod-configmaps-631f2532-467d-430b-bbf5-a11f34edadbe to disappear
Jul  7 08:08:18.618: INFO: Pod pod-configmaps-631f2532-467d-430b-bbf5-a11f34edadbe no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:48.088 seconds]
[sig-storage] ConfigMap
test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:08:18.798: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/framework/framework.go:175

... skipping 29 lines ...
STEP: creating execpod-noendpoints on node kind-worker
Jul  7 08:07:33.263: INFO: Creating new exec pod
Jul  7 08:08:15.467: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node kind-worker
Jul  7 08:08:15.467: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35195 --kubeconfig=/root/.kube/kind-test-config exec --namespace=services-9164 execpod-noendpoints8hxrt -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Jul  7 08:08:18.760: INFO: rc: 1
Jul  7 08:08:18.782: INFO: error contained 'REFUSED', as expected: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35195 --kubeconfig=/root/.kube/kind-test-config exec --namespace=services-9164 execpod-noendpoints8hxrt -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect --timeout=3s no-pods:80
REFUSED
command terminated with exit code 1

error:
exit status 1
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:175
Jul  7 08:08:18.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9164" for this suite.
[AfterEach] [sig-network] Services
... skipping 3 lines ...
• [SLOW TEST:48.260 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should be rejected when no endpoints exist
  test/e2e/network/service.go:2668
------------------------------
{"msg":"PASSED [sig-network] Services should be rejected when no endpoints exist","total":-1,"completed":1,"skipped":19,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:08:18.962: INFO: Only supported for providers [vsphere] (not skeleton)
... skipping 73 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  test/e2e/framework/framework.go:175
Jul  7 08:08:19.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf,application/json\"","total":-1,"completed":2,"skipped":33,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:08:19.184: INFO: Only supported for providers [aws] (not skeleton)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/framework/framework.go:175

... skipping 312 lines ...
[sig-storage] CSI Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: csi-hostpath]
  test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/testsuites/base.go:126
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:191

      Driver "csi-hostpath" does not support topology - skipping

      test/e2e/storage/testsuites/topology.go:96
------------------------------
... skipping 33 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/testsuites/base.go:126
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:191

      Driver gluster doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:180
------------------------------
... skipping 7 lines ...
Jul  7 08:07:31.047: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-955d9aff-833a-43b7-b421-140f0a1e6d12
STEP: Creating a pod to test consume configMaps
Jul  7 08:07:31.117: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-edc947dc-098c-4d82-9ce9-e2e7273f6f4c" in namespace "projected-1877" to be "Succeeded or Failed"
Jul  7 08:07:31.171: INFO: Pod "pod-projected-configmaps-edc947dc-098c-4d82-9ce9-e2e7273f6f4c": Phase="Pending", Reason="", readiness=false. Elapsed: 54.194698ms
Jul  7 08:07:33.337: INFO: Pod "pod-projected-configmaps-edc947dc-098c-4d82-9ce9-e2e7273f6f4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220631548s
Jul  7 08:07:35.407: INFO: Pod "pod-projected-configmaps-edc947dc-098c-4d82-9ce9-e2e7273f6f4c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.290226452s
Jul  7 08:07:37.469: INFO: Pod "pod-projected-configmaps-edc947dc-098c-4d82-9ce9-e2e7273f6f4c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.352164551s
Jul  7 08:07:39.518: INFO: Pod "pod-projected-configmaps-edc947dc-098c-4d82-9ce9-e2e7273f6f4c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.400894554s
Jul  7 08:07:41.549: INFO: Pod "pod-projected-configmaps-edc947dc-098c-4d82-9ce9-e2e7273f6f4c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.432228059s
... skipping 13 lines ...
Jul  7 08:08:10.738: INFO: Pod "pod-projected-configmaps-edc947dc-098c-4d82-9ce9-e2e7273f6f4c": Phase="Pending", Reason="", readiness=false. Elapsed: 39.62124723s
Jul  7 08:08:12.805: INFO: Pod "pod-projected-configmaps-edc947dc-098c-4d82-9ce9-e2e7273f6f4c": Phase="Pending", Reason="", readiness=false. Elapsed: 41.688784549s
Jul  7 08:08:14.871: INFO: Pod "pod-projected-configmaps-edc947dc-098c-4d82-9ce9-e2e7273f6f4c": Phase="Pending", Reason="", readiness=false. Elapsed: 43.754127217s
Jul  7 08:08:17.308: INFO: Pod "pod-projected-configmaps-edc947dc-098c-4d82-9ce9-e2e7273f6f4c": Phase="Running", Reason="", readiness=true. Elapsed: 46.191557238s
Jul  7 08:08:19.315: INFO: Pod "pod-projected-configmaps-edc947dc-098c-4d82-9ce9-e2e7273f6f4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 48.198041663s
STEP: Saw pod success
Jul  7 08:08:19.330: INFO: Pod "pod-projected-configmaps-edc947dc-098c-4d82-9ce9-e2e7273f6f4c" satisfied condition "Succeeded or Failed"
Jul  7 08:08:19.456: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-edc947dc-098c-4d82-9ce9-e2e7273f6f4c container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jul  7 08:08:19.718: INFO: Waiting for pod pod-projected-configmaps-edc947dc-098c-4d82-9ce9-e2e7273f6f4c to disappear
Jul  7 08:08:19.776: INFO: Pod pod-projected-configmaps-edc947dc-098c-4d82-9ce9-e2e7273f6f4c no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
... skipping 15 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jul  7 08:07:54.787: INFO: Waiting up to 5m0s for pod "downwardapi-volume-88342bea-88d4-416e-a844-d81164e0dbcf" in namespace "downward-api-1742" to be "Succeeded or Failed"
Jul  7 08:07:54.800: INFO: Pod "downwardapi-volume-88342bea-88d4-416e-a844-d81164e0dbcf": Phase="Pending", Reason="", readiness=false. Elapsed: 13.378424ms
Jul  7 08:07:56.820: INFO: Pod "downwardapi-volume-88342bea-88d4-416e-a844-d81164e0dbcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033124595s
Jul  7 08:07:58.962: INFO: Pod "downwardapi-volume-88342bea-88d4-416e-a844-d81164e0dbcf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175223662s
Jul  7 08:08:01.015: INFO: Pod "downwardapi-volume-88342bea-88d4-416e-a844-d81164e0dbcf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.227969537s
Jul  7 08:08:03.185: INFO: Pod "downwardapi-volume-88342bea-88d4-416e-a844-d81164e0dbcf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.398133929s
Jul  7 08:08:05.243: INFO: Pod "downwardapi-volume-88342bea-88d4-416e-a844-d81164e0dbcf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.456133194s
... skipping 2 lines ...
Jul  7 08:08:11.540: INFO: Pod "downwardapi-volume-88342bea-88d4-416e-a844-d81164e0dbcf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.753770633s
Jul  7 08:08:13.574: INFO: Pod "downwardapi-volume-88342bea-88d4-416e-a844-d81164e0dbcf": Phase="Pending", Reason="", readiness=false. Elapsed: 18.787276332s
Jul  7 08:08:15.630: INFO: Pod "downwardapi-volume-88342bea-88d4-416e-a844-d81164e0dbcf": Phase="Pending", Reason="", readiness=false. Elapsed: 20.843748569s
Jul  7 08:08:18.066: INFO: Pod "downwardapi-volume-88342bea-88d4-416e-a844-d81164e0dbcf": Phase="Pending", Reason="", readiness=false. Elapsed: 23.279667737s
Jul  7 08:08:20.147: INFO: Pod "downwardapi-volume-88342bea-88d4-416e-a844-d81164e0dbcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.360528418s
STEP: Saw pod success
Jul  7 08:08:20.147: INFO: Pod "downwardapi-volume-88342bea-88d4-416e-a844-d81164e0dbcf" satisfied condition "Succeeded or Failed"
Jul  7 08:08:20.180: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-88342bea-88d4-416e-a844-d81164e0dbcf container client-container: <nil>
STEP: delete the pod
Jul  7 08:08:20.630: INFO: Waiting for pod downwardapi-volume-88342bea-88d4-416e-a844-d81164e0dbcf to disappear
Jul  7 08:08:20.638: INFO: Pod downwardapi-volume-88342bea-88d4-416e-a844-d81164e0dbcf no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:26.348 seconds]
[sig-storage] Downward API volume
test/e2e/common/downwardapi_volume.go:37
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:52.680 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":1,"skipped":18,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:08:24.482: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 68 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/projected_configmap.go:59
STEP: Creating configMap with name projected-configmap-test-volume-a07484e0-2b4f-4d6c-a0a1-72d401db4892
STEP: Creating a pod to test consume configMaps
Jul  7 08:08:20.147: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ac9f0128-089a-41b5-a45f-33ce360fe373" in namespace "projected-7189" to be "Succeeded or Failed"
Jul  7 08:08:20.178: INFO: Pod "pod-projected-configmaps-ac9f0128-089a-41b5-a45f-33ce360fe373": Phase="Pending", Reason="", readiness=false. Elapsed: 30.941208ms
Jul  7 08:08:22.262: INFO: Pod "pod-projected-configmaps-ac9f0128-089a-41b5-a45f-33ce360fe373": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114642585s
Jul  7 08:08:24.421: INFO: Pod "pod-projected-configmaps-ac9f0128-089a-41b5-a45f-33ce360fe373": Phase="Pending", Reason="", readiness=false. Elapsed: 4.274017272s
Jul  7 08:08:26.610: INFO: Pod "pod-projected-configmaps-ac9f0128-089a-41b5-a45f-33ce360fe373": Phase="Pending", Reason="", readiness=false. Elapsed: 6.462557298s
Jul  7 08:08:28.675: INFO: Pod "pod-projected-configmaps-ac9f0128-089a-41b5-a45f-33ce360fe373": Phase="Pending", Reason="", readiness=false. Elapsed: 8.527949874s
Jul  7 08:08:30.831: INFO: Pod "pod-projected-configmaps-ac9f0128-089a-41b5-a45f-33ce360fe373": Phase="Pending", Reason="", readiness=false. Elapsed: 10.684139588s
Jul  7 08:08:32.858: INFO: Pod "pod-projected-configmaps-ac9f0128-089a-41b5-a45f-33ce360fe373": Phase="Pending", Reason="", readiness=false. Elapsed: 12.710574199s
Jul  7 08:08:34.922: INFO: Pod "pod-projected-configmaps-ac9f0128-089a-41b5-a45f-33ce360fe373": Phase="Pending", Reason="", readiness=false. Elapsed: 14.774465063s
Jul  7 08:08:37.221: INFO: Pod "pod-projected-configmaps-ac9f0128-089a-41b5-a45f-33ce360fe373": Phase="Pending", Reason="", readiness=false. Elapsed: 17.073389759s
Jul  7 08:08:39.243: INFO: Pod "pod-projected-configmaps-ac9f0128-089a-41b5-a45f-33ce360fe373": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.09536206s
STEP: Saw pod success
Jul  7 08:08:39.243: INFO: Pod "pod-projected-configmaps-ac9f0128-089a-41b5-a45f-33ce360fe373" satisfied condition "Succeeded or Failed"
Jul  7 08:08:39.324: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-ac9f0128-089a-41b5-a45f-33ce360fe373 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jul  7 08:08:39.494: INFO: Waiting for pod pod-projected-configmaps-ac9f0128-089a-41b5-a45f-33ce360fe373 to disappear
Jul  7 08:08:39.503: INFO: Pod pod-projected-configmaps-ac9f0128-089a-41b5-a45f-33ce360fe373 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:20.131 seconds]
[sig-storage] Projected configMap
test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/projected_configmap.go:59
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":98,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:08:20.733: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-2526d313-8bc2-4697-9c6c-f306d743b059
STEP: Creating a pod to test consume configMaps
Jul  7 08:08:21.172: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a07615b7-ad75-4607-83a1-b6d097aa11e1" in namespace "projected-5358" to be "Succeeded or Failed"
Jul  7 08:08:21.230: INFO: Pod "pod-projected-configmaps-a07615b7-ad75-4607-83a1-b6d097aa11e1": Phase="Pending", Reason="", readiness=false. Elapsed: 58.392831ms
Jul  7 08:08:23.334: INFO: Pod "pod-projected-configmaps-a07615b7-ad75-4607-83a1-b6d097aa11e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161673288s
Jul  7 08:08:25.388: INFO: Pod "pod-projected-configmaps-a07615b7-ad75-4607-83a1-b6d097aa11e1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.216412186s
Jul  7 08:08:27.423: INFO: Pod "pod-projected-configmaps-a07615b7-ad75-4607-83a1-b6d097aa11e1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.250959849s
Jul  7 08:08:29.528: INFO: Pod "pod-projected-configmaps-a07615b7-ad75-4607-83a1-b6d097aa11e1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.356255545s
Jul  7 08:08:31.565: INFO: Pod "pod-projected-configmaps-a07615b7-ad75-4607-83a1-b6d097aa11e1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.392826795s
Jul  7 08:08:33.628: INFO: Pod "pod-projected-configmaps-a07615b7-ad75-4607-83a1-b6d097aa11e1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.455670963s
Jul  7 08:08:35.669: INFO: Pod "pod-projected-configmaps-a07615b7-ad75-4607-83a1-b6d097aa11e1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.496458175s
Jul  7 08:08:37.708: INFO: Pod "pod-projected-configmaps-a07615b7-ad75-4607-83a1-b6d097aa11e1": Phase="Pending", Reason="", readiness=false. Elapsed: 16.535477443s
Jul  7 08:08:39.786: INFO: Pod "pod-projected-configmaps-a07615b7-ad75-4607-83a1-b6d097aa11e1": Phase="Running", Reason="", readiness=true. Elapsed: 18.61393365s
Jul  7 08:08:41.911: INFO: Pod "pod-projected-configmaps-a07615b7-ad75-4607-83a1-b6d097aa11e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.738969341s
STEP: Saw pod success
Jul  7 08:08:41.911: INFO: Pod "pod-projected-configmaps-a07615b7-ad75-4607-83a1-b6d097aa11e1" satisfied condition "Succeeded or Failed"
Jul  7 08:08:41.958: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-a07615b7-ad75-4607-83a1-b6d097aa11e1 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jul  7 08:08:42.072: INFO: Waiting for pod pod-projected-configmaps-a07615b7-ad75-4607-83a1-b6d097aa11e1 to disappear
Jul  7 08:08:42.108: INFO: Pod pod-projected-configmaps-a07615b7-ad75-4607-83a1-b6d097aa11e1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:21.430 seconds]
[sig-storage] Projected configMap
test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:08:42.170: INFO: Driver nfs doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/framework/framework.go:175

... skipping 127 lines ...
  test/e2e/framework/framework.go:597
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
Jul  7 08:08:25.160: INFO: Waiting for webhook configuration to be ready...
Jul  7 08:08:35.308: INFO: Waiting for webhook configuration to be ready...
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
Jul  7 08:08:37.896: INFO: Waiting for webhook configuration to be ready...
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jul  7 08:08:48.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6303" for this suite.
STEP: Destroying namespace "webhook-6303-markers" for this suite.
... skipping 4 lines ...
• [SLOW TEST:79.128 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:08:49.794: INFO: Only supported for providers [openstack] (not skeleton)
[AfterEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/framework/framework.go:175

... skipping 48 lines ...
• [SLOW TEST:11.272 seconds]
[sig-api-machinery] Watchers
test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":4,"skipped":99,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 77 lines ...
test/e2e/network/framework.go:23
  Granular Checks: Services
  test/e2e/network/networking.go:162
    should be able to handle large requests: udp
    test/e2e/network/networking.go:318
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp","total":-1,"completed":1,"skipped":0,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 45 lines ...
• [SLOW TEST:89.457 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:09:00.131: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 46 lines ...
Jul  7 08:07:41.794: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Jul  7 08:07:42.470: INFO: Waiting up to 5m0s for pod "downward-api-028e7563-9e21-4c41-9c6a-680d9951cb68" in namespace "downward-api-4797" to be "Succeeded or Failed"
Jul  7 08:07:42.544: INFO: Pod "downward-api-028e7563-9e21-4c41-9c6a-680d9951cb68": Phase="Pending", Reason="", readiness=false. Elapsed: 73.27256ms
Jul  7 08:07:44.557: INFO: Pod "downward-api-028e7563-9e21-4c41-9c6a-680d9951cb68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086525677s
Jul  7 08:07:46.616: INFO: Pod "downward-api-028e7563-9e21-4c41-9c6a-680d9951cb68": Phase="Pending", Reason="", readiness=false. Elapsed: 4.145398066s
Jul  7 08:07:48.800: INFO: Pod "downward-api-028e7563-9e21-4c41-9c6a-680d9951cb68": Phase="Pending", Reason="", readiness=false. Elapsed: 6.329324023s
Jul  7 08:07:50.907: INFO: Pod "downward-api-028e7563-9e21-4c41-9c6a-680d9951cb68": Phase="Pending", Reason="", readiness=false. Elapsed: 8.4361616s
Jul  7 08:07:53.017: INFO: Pod "downward-api-028e7563-9e21-4c41-9c6a-680d9951cb68": Phase="Pending", Reason="", readiness=false. Elapsed: 10.546810841s
... skipping 30 lines ...
Jul  7 08:08:58.174: INFO: Pod "downward-api-028e7563-9e21-4c41-9c6a-680d9951cb68": Phase="Pending", Reason="", readiness=false. Elapsed: 1m15.703562368s
Jul  7 08:09:00.209: INFO: Pod "downward-api-028e7563-9e21-4c41-9c6a-680d9951cb68": Phase="Pending", Reason="", readiness=false. Elapsed: 1m17.738711043s
Jul  7 08:09:02.243: INFO: Pod "downward-api-028e7563-9e21-4c41-9c6a-680d9951cb68": Phase="Pending", Reason="", readiness=false. Elapsed: 1m19.772552261s
Jul  7 08:09:04.360: INFO: Pod "downward-api-028e7563-9e21-4c41-9c6a-680d9951cb68": Phase="Pending", Reason="", readiness=false. Elapsed: 1m21.889987109s
Jul  7 08:09:06.430: INFO: Pod "downward-api-028e7563-9e21-4c41-9c6a-680d9951cb68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m23.959753129s
STEP: Saw pod success
Jul  7 08:09:06.430: INFO: Pod "downward-api-028e7563-9e21-4c41-9c6a-680d9951cb68" satisfied condition "Succeeded or Failed"
Jul  7 08:09:06.444: INFO: Trying to get logs from node kind-worker2 pod downward-api-028e7563-9e21-4c41-9c6a-680d9951cb68 container dapi-container: <nil>
STEP: delete the pod
Jul  7 08:09:06.983: INFO: Waiting for pod downward-api-028e7563-9e21-4c41-9c6a-680d9951cb68 to disappear
Jul  7 08:09:07.078: INFO: Pod downward-api-028e7563-9e21-4c41-9c6a-680d9951cb68 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:85.544 seconds]
[sig-node] Downward API
test/e2e/common/downward_api.go:34
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [sig-node] RuntimeClass
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:09:00.153: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename runtimeclass
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
  test/e2e/common/runtimeclass.go:55
Jul  7 08:09:00.608: INFO: Waiting up to 5m0s for pod "test-runtimeclass-runtimeclass-3991-preconfigured-handler-n4gt9" in namespace "runtimeclass-3991" to be "Succeeded or Failed"
Jul  7 08:09:00.729: INFO: Pod "test-runtimeclass-runtimeclass-3991-preconfigured-handler-n4gt9": Phase="Pending", Reason="", readiness=false. Elapsed: 120.818073ms
Jul  7 08:09:02.814: INFO: Pod "test-runtimeclass-runtimeclass-3991-preconfigured-handler-n4gt9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206214458s
Jul  7 08:09:04.883: INFO: Pod "test-runtimeclass-runtimeclass-3991-preconfigured-handler-n4gt9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.274831199s
Jul  7 08:09:07.073: INFO: Pod "test-runtimeclass-runtimeclass-3991-preconfigured-handler-n4gt9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.465221681s
Jul  7 08:09:09.226: INFO: Pod "test-runtimeclass-runtimeclass-3991-preconfigured-handler-n4gt9": Phase="Running", Reason="", readiness=true. Elapsed: 8.617312138s
Jul  7 08:09:11.250: INFO: Pod "test-runtimeclass-runtimeclass-3991-preconfigured-handler-n4gt9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.641398111s
STEP: Saw pod success
Jul  7 08:09:11.250: INFO: Pod "test-runtimeclass-runtimeclass-3991-preconfigured-handler-n4gt9" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] RuntimeClass
  test/e2e/framework/framework.go:175
Jul  7 08:09:11.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-3991" for this suite.


• [SLOW TEST:11.256 seconds]
[sig-node] RuntimeClass
test/e2e/common/runtimeclass.go:39
  should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
  test/e2e/common/runtimeclass.go:55
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]","total":-1,"completed":2,"skipped":14,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:09:11.411: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/framework/framework.go:175

... skipping 59 lines ...
• [SLOW TEST:84.654 seconds]
[k8s.io] Pods
test/e2e/framework/framework.go:592
  should support remote command execution over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:09:12.600: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/framework/framework.go:175

... skipping 43 lines ...
Jul  7 08:08:59.893: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Jul  7 08:09:00.351: INFO: Waiting up to 5m0s for pod "downward-api-a00c901b-4c99-44bc-b4c9-66946e744e0c" in namespace "downward-api-9206" to be "Succeeded or Failed"
Jul  7 08:09:00.434: INFO: Pod "downward-api-a00c901b-4c99-44bc-b4c9-66946e744e0c": Phase="Pending", Reason="", readiness=false. Elapsed: 83.054905ms
Jul  7 08:09:02.443: INFO: Pod "downward-api-a00c901b-4c99-44bc-b4c9-66946e744e0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091722512s
Jul  7 08:09:04.498: INFO: Pod "downward-api-a00c901b-4c99-44bc-b4c9-66946e744e0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146865849s
Jul  7 08:09:06.534: INFO: Pod "downward-api-a00c901b-4c99-44bc-b4c9-66946e744e0c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.182883389s
Jul  7 08:09:08.615: INFO: Pod "downward-api-a00c901b-4c99-44bc-b4c9-66946e744e0c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.2634285s
Jul  7 08:09:11.002: INFO: Pod "downward-api-a00c901b-4c99-44bc-b4c9-66946e744e0c": Phase="Running", Reason="", readiness=true. Elapsed: 10.65115559s
Jul  7 08:09:13.112: INFO: Pod "downward-api-a00c901b-4c99-44bc-b4c9-66946e744e0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.760860472s
STEP: Saw pod success
Jul  7 08:09:13.112: INFO: Pod "downward-api-a00c901b-4c99-44bc-b4c9-66946e744e0c" satisfied condition "Succeeded or Failed"
Jul  7 08:09:13.176: INFO: Trying to get logs from node kind-worker pod downward-api-a00c901b-4c99-44bc-b4c9-66946e744e0c container dapi-container: <nil>
STEP: delete the pod
Jul  7 08:09:13.563: INFO: Waiting for pod downward-api-a00c901b-4c99-44bc-b4c9-66946e744e0c to disappear
Jul  7 08:09:13.641: INFO: Pod downward-api-a00c901b-4c99-44bc-b4c9-66946e744e0c no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:13.869 seconds]
[sig-node] Downward API
test/e2e/common/downward_api.go:34
  should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:09:13.766: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
... skipping 44 lines ...
Jul  7 08:08:38.594: INFO: PersistentVolumeClaim pvc-4c4pf found but phase is Pending instead of Bound.
Jul  7 08:08:40.641: INFO: PersistentVolumeClaim pvc-4c4pf found and phase=Bound (14.408970111s)
Jul  7 08:08:40.641: INFO: Waiting up to 3m0s for PersistentVolume local-xwh5m to have phase Bound
Jul  7 08:08:40.687: INFO: PersistentVolume local-xwh5m found and phase=Bound (46.19462ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-wpmq
STEP: Creating a pod to test atomic-volume-subpath
Jul  7 08:08:40.946: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wpmq" in namespace "provisioning-7343" to be "Succeeded or Failed"
Jul  7 08:08:41.054: INFO: Pod "pod-subpath-test-preprovisionedpv-wpmq": Phase="Pending", Reason="", readiness=false. Elapsed: 92.486939ms
Jul  7 08:08:43.109: INFO: Pod "pod-subpath-test-preprovisionedpv-wpmq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147950862s
Jul  7 08:08:45.151: INFO: Pod "pod-subpath-test-preprovisionedpv-wpmq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189810854s
Jul  7 08:08:47.255: INFO: Pod "pod-subpath-test-preprovisionedpv-wpmq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.293636954s
Jul  7 08:08:49.394: INFO: Pod "pod-subpath-test-preprovisionedpv-wpmq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.433072737s
Jul  7 08:08:51.417: INFO: Pod "pod-subpath-test-preprovisionedpv-wpmq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.455876459s
... skipping 8 lines ...
Jul  7 08:09:11.003: INFO: Pod "pod-subpath-test-preprovisionedpv-wpmq": Phase="Running", Reason="", readiness=true. Elapsed: 30.041739138s
Jul  7 08:09:13.104: INFO: Pod "pod-subpath-test-preprovisionedpv-wpmq": Phase="Running", Reason="", readiness=true. Elapsed: 32.142642282s
Jul  7 08:09:15.159: INFO: Pod "pod-subpath-test-preprovisionedpv-wpmq": Phase="Running", Reason="", readiness=true. Elapsed: 34.198072656s
Jul  7 08:09:17.208: INFO: Pod "pod-subpath-test-preprovisionedpv-wpmq": Phase="Running", Reason="", readiness=true. Elapsed: 36.246181247s
Jul  7 08:09:19.514: INFO: Pod "pod-subpath-test-preprovisionedpv-wpmq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.552671869s
STEP: Saw pod success
Jul  7 08:09:19.514: INFO: Pod "pod-subpath-test-preprovisionedpv-wpmq" satisfied condition "Succeeded or Failed"
Jul  7 08:09:19.632: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-preprovisionedpv-wpmq container test-container-subpath-preprovisionedpv-wpmq: <nil>
STEP: delete the pod
Jul  7 08:09:19.875: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wpmq to disappear
Jul  7 08:09:19.915: INFO: Pod pod-subpath-test-preprovisionedpv-wpmq no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wpmq
Jul  7 08:09:19.915: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wpmq" in namespace "provisioning-7343"
... skipping 19 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:126
      should support file as subpath [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:226
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":2,"skipped":10,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:09:22.247: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 71 lines ...
Jul  7 08:08:12.114: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support file as subpath [LinuxOnly]
  test/e2e/storage/testsuites/subpath.go:226
Jul  7 08:08:12.591: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul  7 08:08:12.855: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6751" in namespace "provisioning-6751" to be "Succeeded or Failed"
Jul  7 08:08:12.914: INFO: Pod "hostpath-symlink-prep-provisioning-6751": Phase="Pending", Reason="", readiness=false. Elapsed: 58.446434ms
Jul  7 08:08:14.930: INFO: Pod "hostpath-symlink-prep-provisioning-6751": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075158675s
Jul  7 08:08:17.309: INFO: Pod "hostpath-symlink-prep-provisioning-6751": Phase="Pending", Reason="", readiness=false. Elapsed: 4.453767405s
Jul  7 08:08:19.439: INFO: Pod "hostpath-symlink-prep-provisioning-6751": Phase="Pending", Reason="", readiness=false. Elapsed: 6.584315421s
Jul  7 08:08:21.535: INFO: Pod "hostpath-symlink-prep-provisioning-6751": Phase="Pending", Reason="", readiness=false. Elapsed: 8.679895951s
Jul  7 08:08:23.561: INFO: Pod "hostpath-symlink-prep-provisioning-6751": Phase="Pending", Reason="", readiness=false. Elapsed: 10.705708341s
Jul  7 08:08:25.574: INFO: Pod "hostpath-symlink-prep-provisioning-6751": Phase="Running", Reason="", readiness=true. Elapsed: 12.719142426s
Jul  7 08:08:27.602: INFO: Pod "hostpath-symlink-prep-provisioning-6751": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.747003181s
STEP: Saw pod success
Jul  7 08:08:27.602: INFO: Pod "hostpath-symlink-prep-provisioning-6751" satisfied condition "Succeeded or Failed"
Jul  7 08:08:27.602: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6751" in namespace "provisioning-6751"
Jul  7 08:08:27.715: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6751" to be fully deleted
Jul  7 08:08:27.722: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-52dv
STEP: Creating a pod to test atomic-volume-subpath
Jul  7 08:08:27.765: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-52dv" in namespace "provisioning-6751" to be "Succeeded or Failed"
Jul  7 08:08:27.800: INFO: Pod "pod-subpath-test-inlinevolume-52dv": Phase="Pending", Reason="", readiness=false. Elapsed: 34.932296ms
Jul  7 08:08:29.862: INFO: Pod "pod-subpath-test-inlinevolume-52dv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096941414s
Jul  7 08:08:31.944: INFO: Pod "pod-subpath-test-inlinevolume-52dv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178708541s
Jul  7 08:08:33.979: INFO: Pod "pod-subpath-test-inlinevolume-52dv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.21377535s
Jul  7 08:08:36.035: INFO: Pod "pod-subpath-test-inlinevolume-52dv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.26971801s
Jul  7 08:08:38.134: INFO: Pod "pod-subpath-test-inlinevolume-52dv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.36886598s
... skipping 10 lines ...
Jul  7 08:09:00.794: INFO: Pod "pod-subpath-test-inlinevolume-52dv": Phase="Running", Reason="", readiness=true. Elapsed: 33.028734791s
Jul  7 08:09:02.882: INFO: Pod "pod-subpath-test-inlinevolume-52dv": Phase="Running", Reason="", readiness=true. Elapsed: 35.11698113s
Jul  7 08:09:04.945: INFO: Pod "pod-subpath-test-inlinevolume-52dv": Phase="Running", Reason="", readiness=true. Elapsed: 37.179627014s
Jul  7 08:09:07.097: INFO: Pod "pod-subpath-test-inlinevolume-52dv": Phase="Running", Reason="", readiness=true. Elapsed: 39.332433119s
Jul  7 08:09:09.214: INFO: Pod "pod-subpath-test-inlinevolume-52dv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 41.448985332s
STEP: Saw pod success
Jul  7 08:09:09.214: INFO: Pod "pod-subpath-test-inlinevolume-52dv" satisfied condition "Succeeded or Failed"
Jul  7 08:09:09.298: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-inlinevolume-52dv container test-container-subpath-inlinevolume-52dv: <nil>
STEP: delete the pod
Jul  7 08:09:09.437: INFO: Waiting for pod pod-subpath-test-inlinevolume-52dv to disappear
Jul  7 08:09:09.490: INFO: Pod pod-subpath-test-inlinevolume-52dv no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-52dv
Jul  7 08:09:09.490: INFO: Deleting pod "pod-subpath-test-inlinevolume-52dv" in namespace "provisioning-6751"
STEP: Deleting pod
Jul  7 08:09:09.684: INFO: Deleting pod "pod-subpath-test-inlinevolume-52dv" in namespace "provisioning-6751"
Jul  7 08:09:09.803: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-6751" in namespace "provisioning-6751" to be "Succeeded or Failed"
Jul  7 08:09:09.937: INFO: Pod "hostpath-symlink-prep-provisioning-6751": Phase="Pending", Reason="", readiness=false. Elapsed: 134.50306ms
Jul  7 08:09:11.962: INFO: Pod "hostpath-symlink-prep-provisioning-6751": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158800519s
Jul  7 08:09:14.078: INFO: Pod "hostpath-symlink-prep-provisioning-6751": Phase="Pending", Reason="", readiness=false. Elapsed: 4.274820454s
Jul  7 08:09:16.284: INFO: Pod "hostpath-symlink-prep-provisioning-6751": Phase="Pending", Reason="", readiness=false. Elapsed: 6.481672268s
Jul  7 08:09:18.314: INFO: Pod "hostpath-symlink-prep-provisioning-6751": Phase="Pending", Reason="", readiness=false. Elapsed: 8.511696071s
Jul  7 08:09:20.355: INFO: Pod "hostpath-symlink-prep-provisioning-6751": Phase="Running", Reason="", readiness=true. Elapsed: 10.552690105s
Jul  7 08:09:22.445: INFO: Pod "hostpath-symlink-prep-provisioning-6751": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.641896046s
STEP: Saw pod success
Jul  7 08:09:22.445: INFO: Pod "hostpath-symlink-prep-provisioning-6751" satisfied condition "Succeeded or Failed"
Jul  7 08:09:22.445: INFO: Deleting pod "hostpath-symlink-prep-provisioning-6751" in namespace "provisioning-6751"
Jul  7 08:09:22.566: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-6751" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:175
Jul  7 08:09:22.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-6751" for this suite.
... skipping 6 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:126
      should support file as subpath [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:226
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":2,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:09:22.827: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/framework/framework.go:175

... skipping 2 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/testsuites/base.go:126
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:180
------------------------------
... skipping 137 lines ...
Jul  7 08:07:30.415: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename cronjob
Jul  7 08:07:30.810: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  test/e2e/apps/cronjob.go:58
[It] should delete failed finished jobs with limit of one job
  test/e2e/apps/cronjob.go:247
STEP: Creating an AllowConcurrent cronjob with custom history limit
STEP: Ensuring a finished job exists
STEP: Ensuring a finished job exists by listing jobs explicitly
STEP: Ensuring this job and its pods do not exist anymore
STEP: Ensuring there is 1 finished job by listing jobs explicitly
... skipping 4 lines ...
STEP: Destroying namespace "cronjob-2080" for this suite.


• [SLOW TEST:113.254 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should delete failed finished jobs with limit of one job
  test/e2e/apps/cronjob.go:247
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete failed finished jobs with limit of one job","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:09:23.675: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/framework/framework.go:175

... skipping 24 lines ...
  test/e2e/framework/framework.go:175
Jul  7 08:09:24.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-859" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:09:24.600: INFO: Driver csi-hostpath doesn't support InlineVolume -- skipping
... skipping 125 lines ...
      Driver local doesn't support ext4 -- skipping

      test/e2e/storage/testsuites/base.go:185
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":1,"skipped":16,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:08:42.313: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 16 lines ...
Jul  7 08:08:54.343: INFO: PersistentVolumeClaim pvc-4qmn5 found but phase is Pending instead of Bound.
Jul  7 08:08:56.371: INFO: PersistentVolumeClaim pvc-4qmn5 found and phase=Bound (4.152436899s)
Jul  7 08:08:56.371: INFO: Waiting up to 3m0s for PersistentVolume local-gf2hb to have phase Bound
Jul  7 08:08:56.380: INFO: PersistentVolume local-gf2hb found and phase=Bound (8.622427ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-m6kw
STEP: Creating a pod to test subpath
Jul  7 08:08:56.424: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-m6kw" in namespace "provisioning-6240" to be "Succeeded or Failed"
Jul  7 08:08:56.474: INFO: Pod "pod-subpath-test-preprovisionedpv-m6kw": Phase="Pending", Reason="", readiness=false. Elapsed: 49.465582ms
Jul  7 08:08:58.634: INFO: Pod "pod-subpath-test-preprovisionedpv-m6kw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.209309195s
Jul  7 08:09:00.731: INFO: Pod "pod-subpath-test-preprovisionedpv-m6kw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30670608s
Jul  7 08:09:02.811: INFO: Pod "pod-subpath-test-preprovisionedpv-m6kw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.386390845s
Jul  7 08:09:04.879: INFO: Pod "pod-subpath-test-preprovisionedpv-m6kw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.454263941s
Jul  7 08:09:07.088: INFO: Pod "pod-subpath-test-preprovisionedpv-m6kw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.663740514s
... skipping 2 lines ...
Jul  7 08:09:13.452: INFO: Pod "pod-subpath-test-preprovisionedpv-m6kw": Phase="Pending", Reason="", readiness=false. Elapsed: 17.027693873s
Jul  7 08:09:15.528: INFO: Pod "pod-subpath-test-preprovisionedpv-m6kw": Phase="Pending", Reason="", readiness=false. Elapsed: 19.104066734s
Jul  7 08:09:17.606: INFO: Pod "pod-subpath-test-preprovisionedpv-m6kw": Phase="Running", Reason="", readiness=false. Elapsed: 21.181634205s
Jul  7 08:09:19.639: INFO: Pod "pod-subpath-test-preprovisionedpv-m6kw": Phase="Running", Reason="", readiness=false. Elapsed: 23.215039961s
Jul  7 08:09:21.750: INFO: Pod "pod-subpath-test-preprovisionedpv-m6kw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.326042228s
STEP: Saw pod success
Jul  7 08:09:21.750: INFO: Pod "pod-subpath-test-preprovisionedpv-m6kw" satisfied condition "Succeeded or Failed"
Jul  7 08:09:21.957: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-preprovisionedpv-m6kw container test-container-volume-preprovisionedpv-m6kw: <nil>
STEP: delete the pod
Jul  7 08:09:22.460: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-m6kw to disappear
Jul  7 08:09:22.489: INFO: Pod pod-subpath-test-preprovisionedpv-m6kw no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-m6kw
Jul  7 08:09:22.489: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-m6kw" in namespace "provisioning-6240"
... skipping 24 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:126
      should support non-existent path
      test/e2e/storage/testsuites/subpath.go:190
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path","total":-1,"completed":2,"skipped":16,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:09:13.769: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/projected_configmap.go:110
STEP: Creating configMap with name projected-configmap-test-volume-map-1c3a618d-69c2-40fe-8901-0d43e01a9549
STEP: Creating a pod to test consume configMaps
Jul  7 08:09:14.117: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8bee6504-bc56-498c-9605-1cfb101a7ca1" in namespace "projected-3006" to be "Succeeded or Failed"
Jul  7 08:09:14.182: INFO: Pod "pod-projected-configmaps-8bee6504-bc56-498c-9605-1cfb101a7ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 39.514186ms
Jul  7 08:09:16.280: INFO: Pod "pod-projected-configmaps-8bee6504-bc56-498c-9605-1cfb101a7ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13708078s
Jul  7 08:09:18.318: INFO: Pod "pod-projected-configmaps-8bee6504-bc56-498c-9605-1cfb101a7ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.175636865s
Jul  7 08:09:20.357: INFO: Pod "pod-projected-configmaps-8bee6504-bc56-498c-9605-1cfb101a7ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.214109455s
Jul  7 08:09:22.445: INFO: Pod "pod-projected-configmaps-8bee6504-bc56-498c-9605-1cfb101a7ca1": Phase="Running", Reason="", readiness=true. Elapsed: 8.301683939s
Jul  7 08:09:24.559: INFO: Pod "pod-projected-configmaps-8bee6504-bc56-498c-9605-1cfb101a7ca1": Phase="Running", Reason="", readiness=true. Elapsed: 10.41641808s
Jul  7 08:09:26.578: INFO: Pod "pod-projected-configmaps-8bee6504-bc56-498c-9605-1cfb101a7ca1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.434804598s
STEP: Saw pod success
Jul  7 08:09:26.578: INFO: Pod "pod-projected-configmaps-8bee6504-bc56-498c-9605-1cfb101a7ca1" satisfied condition "Succeeded or Failed"
Jul  7 08:09:26.594: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-8bee6504-bc56-498c-9605-1cfb101a7ca1 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jul  7 08:09:27.572: INFO: Waiting for pod pod-projected-configmaps-8bee6504-bc56-498c-9605-1cfb101a7ca1 to disappear
Jul  7 08:09:27.589: INFO: Pod pod-projected-configmaps-8bee6504-bc56-498c-9605-1cfb101a7ca1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:14.002 seconds]
[sig-storage] Projected configMap
test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/projected_configmap.go:110
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":3,"skipped":6,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:08:13.156: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 3 lines ...
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0707 08:08:24.421178   13076 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Jul  7 08:09:27.108: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
Jul  7 08:09:27.108: INFO: Deleting pod "simpletest-rc-to-be-deleted-6682t" in namespace "gc-4752"
Jul  7 08:09:27.576: INFO: Deleting pod "simpletest-rc-to-be-deleted-7459s" in namespace "gc-4752"
Jul  7 08:09:27.826: INFO: Deleting pod "simpletest-rc-to-be-deleted-8nq5b" in namespace "gc-4752"
Jul  7 08:09:28.129: INFO: Deleting pod "simpletest-rc-to-be-deleted-9vlqq" in namespace "gc-4752"
Jul  7 08:09:28.375: INFO: Deleting pod "simpletest-rc-to-be-deleted-9x4nm" in namespace "gc-4752"
[AfterEach] [sig-api-machinery] Garbage collector
... skipping 5 lines ...
• [SLOW TEST:75.589 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":2,"skipped":2,"failed":0}
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:09:28.749: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 7 lines ...
STEP: Destroying namespace "services-3766" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:735

•
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":-1,"completed":3,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] DNS
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:13.870 seconds]
[sig-network] DNS
test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":3,"skipped":39,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:09:36.838: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/framework/framework.go:175

... skipping 80 lines ...
• [SLOW TEST:15.606 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:42
  pod should support shared volumes between containers [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":3,"skipped":18,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:09:42.945: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 23 lines ...
Jul  7 08:09:27.780: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul  7 08:09:28.367: INFO: Waiting up to 5m0s for pod "pod-343963f1-5ca9-49b5-a230-3eb4f59b9676" in namespace "emptydir-1281" to be "Succeeded or Failed"
Jul  7 08:09:28.402: INFO: Pod "pod-343963f1-5ca9-49b5-a230-3eb4f59b9676": Phase="Pending", Reason="", readiness=false. Elapsed: 34.996291ms
Jul  7 08:09:30.474: INFO: Pod "pod-343963f1-5ca9-49b5-a230-3eb4f59b9676": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106577264s
Jul  7 08:09:32.592: INFO: Pod "pod-343963f1-5ca9-49b5-a230-3eb4f59b9676": Phase="Pending", Reason="", readiness=false. Elapsed: 4.224666363s
Jul  7 08:09:34.625: INFO: Pod "pod-343963f1-5ca9-49b5-a230-3eb4f59b9676": Phase="Pending", Reason="", readiness=false. Elapsed: 6.25764866s
Jul  7 08:09:36.681: INFO: Pod "pod-343963f1-5ca9-49b5-a230-3eb4f59b9676": Phase="Pending", Reason="", readiness=false. Elapsed: 8.313784551s
Jul  7 08:09:38.713: INFO: Pod "pod-343963f1-5ca9-49b5-a230-3eb4f59b9676": Phase="Running", Reason="", readiness=true. Elapsed: 10.345746843s
Jul  7 08:09:40.850: INFO: Pod "pod-343963f1-5ca9-49b5-a230-3eb4f59b9676": Phase="Running", Reason="", readiness=true. Elapsed: 12.482473613s
Jul  7 08:09:42.864: INFO: Pod "pod-343963f1-5ca9-49b5-a230-3eb4f59b9676": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.496892441s
STEP: Saw pod success
Jul  7 08:09:42.864: INFO: Pod "pod-343963f1-5ca9-49b5-a230-3eb4f59b9676" satisfied condition "Succeeded or Failed"
Jul  7 08:09:42.867: INFO: Trying to get logs from node kind-worker pod pod-343963f1-5ca9-49b5-a230-3eb4f59b9676 container test-container: <nil>
STEP: delete the pod
Jul  7 08:09:42.933: INFO: Waiting for pod pod-343963f1-5ca9-49b5-a230-3eb4f59b9676 to disappear
Jul  7 08:09:42.982: INFO: Pod pod-343963f1-5ca9-49b5-a230-3eb4f59b9676 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:15.287 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:42
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":9,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:09:43.077: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 71 lines ...
STEP: waiting for the service to expose an endpoint
STEP: waiting up to 3m0s for service hairpin-test in namespace services-1911 to expose endpoints map[hairpin:[8080]]
Jul  7 08:09:35.138: INFO: successfully validated that service hairpin-test in namespace services-1911 exposes endpoints map[hairpin:[8080]]
STEP: Checking if the pod can reach itself
Jul  7 08:09:36.138: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35195 --kubeconfig=/root/.kube/kind-test-config exec --namespace=services-1911 hairpin -- /bin/sh -x -c nc -zv -t -w 2 hairpin-test 8080'
Jul  7 08:09:39.107: INFO: rc: 1
Jul  7 08:09:39.107: INFO: Service reachability failing with error: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35195 --kubeconfig=/root/.kube/kind-test-config exec --namespace=services-1911 hairpin -- /bin/sh -x -c nc -zv -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ nc -zv -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  7 08:09:40.110: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35195 --kubeconfig=/root/.kube/kind-test-config exec --namespace=services-1911 hairpin -- /bin/sh -x -c nc -zv -t -w 2 hairpin-test 8080'
Jul  7 08:09:43.432: INFO: rc: 1
Jul  7 08:09:43.432: INFO: Service reachability failing with error: error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35195 --kubeconfig=/root/.kube/kind-test-config exec --namespace=services-1911 hairpin -- /bin/sh -x -c nc -zv -t -w 2 hairpin-test 8080:
Command stdout:

stderr:
+ nc -zv -t -w 2 hairpin-test 8080
nc: connect to hairpin-test port 8080 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jul  7 08:09:44.109: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35195 --kubeconfig=/root/.kube/kind-test-config exec --namespace=services-1911 hairpin -- /bin/sh -x -c nc -zv -t -w 2 hairpin-test 8080'
Jul  7 08:09:46.596: INFO: stderr: "+ nc -zv -t -w 2 hairpin-test 8080\nConnection to hairpin-test 8080 port [tcp/http-alt] succeeded!\n"
Jul  7 08:09:46.596: INFO: stdout: ""
Jul  7 08:09:46.597: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35195 --kubeconfig=/root/.kube/kind-test-config exec --namespace=services-1911 hairpin -- /bin/sh -x -c nc -zv -t -w 2 10.104.4.227 8080'
... skipping 10 lines ...
• [SLOW TEST:25.605 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should allow pods to hairpin back to themselves through services
  test/e2e/network/service.go:982
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":3,"skipped":31,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 84 lines ...
• [SLOW TEST:138.902 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:09:49.587: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: tmpfs]
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/testsuites/base.go:126
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:180
------------------------------
... skipping 49 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jul  7 08:09:43.177: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2e5ee377-4341-43af-bcfb-288bdbf87978" in namespace "downward-api-4354" to be "Succeeded or Failed"
Jul  7 08:09:43.201: INFO: Pod "downwardapi-volume-2e5ee377-4341-43af-bcfb-288bdbf87978": Phase="Pending", Reason="", readiness=false. Elapsed: 24.829872ms
Jul  7 08:09:45.219: INFO: Pod "downwardapi-volume-2e5ee377-4341-43af-bcfb-288bdbf87978": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042447878s
Jul  7 08:09:47.237: INFO: Pod "downwardapi-volume-2e5ee377-4341-43af-bcfb-288bdbf87978": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060630333s
Jul  7 08:09:49.257: INFO: Pod "downwardapi-volume-2e5ee377-4341-43af-bcfb-288bdbf87978": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080210333s
Jul  7 08:09:51.266: INFO: Pod "downwardapi-volume-2e5ee377-4341-43af-bcfb-288bdbf87978": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.089365327s
STEP: Saw pod success
Jul  7 08:09:51.266: INFO: Pod "downwardapi-volume-2e5ee377-4341-43af-bcfb-288bdbf87978" satisfied condition "Succeeded or Failed"
Jul  7 08:09:51.285: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-2e5ee377-4341-43af-bcfb-288bdbf87978 container client-container: <nil>
STEP: delete the pod
Jul  7 08:09:51.533: INFO: Waiting for pod downwardapi-volume-2e5ee377-4341-43af-bcfb-288bdbf87978 to disappear
Jul  7 08:09:51.549: INFO: Pod downwardapi-volume-2e5ee377-4341-43af-bcfb-288bdbf87978 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:8.622 seconds]
[sig-storage] Downward API volume
test/e2e/common/downwardapi_volume.go:37
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:09:51.598: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 33 lines ...
  test/e2e/framework/framework.go:175
Jul  7 08:09:52.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-2500" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from API server.","total":-1,"completed":5,"skipped":34,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:09:52.715: INFO: Only supported for providers [openstack] (not skeleton)
... skipping 102 lines ...
  test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and write from pod1
      test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":3,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:09:53.531: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/framework/framework.go:175

... skipping 84 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
    test/e2e/storage/testsuites/base.go:126
      should not mount / map unused volumes in a pod
      test/e2e/storage/testsuites/volumemode.go:345
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":2,"skipped":6,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-node] Downward API
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:09:54.956: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Jul  7 08:09:55.246: INFO: Waiting up to 5m0s for pod "downward-api-087772ed-fb8b-4b74-adbd-d3cc5e44ecfb" in namespace "downward-api-6961" to be "Succeeded or Failed"
Jul  7 08:09:55.271: INFO: Pod "downward-api-087772ed-fb8b-4b74-adbd-d3cc5e44ecfb": Phase="Pending", Reason="", readiness=false. Elapsed: 24.950598ms
Jul  7 08:09:57.313: INFO: Pod "downward-api-087772ed-fb8b-4b74-adbd-d3cc5e44ecfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067075675s
Jul  7 08:09:59.469: INFO: Pod "downward-api-087772ed-fb8b-4b74-adbd-d3cc5e44ecfb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222420355s
Jul  7 08:10:01.517: INFO: Pod "downward-api-087772ed-fb8b-4b74-adbd-d3cc5e44ecfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.270547345s
STEP: Saw pod success
Jul  7 08:10:01.517: INFO: Pod "downward-api-087772ed-fb8b-4b74-adbd-d3cc5e44ecfb" satisfied condition "Succeeded or Failed"
Jul  7 08:10:01.556: INFO: Trying to get logs from node kind-worker pod downward-api-087772ed-fb8b-4b74-adbd-d3cc5e44ecfb container dapi-container: <nil>
STEP: delete the pod
Jul  7 08:10:01.661: INFO: Waiting for pod downward-api-087772ed-fb8b-4b74-adbd-d3cc5e44ecfb to disappear
Jul  7 08:10:01.711: INFO: Pod downward-api-087772ed-fb8b-4b74-adbd-d3cc5e44ecfb no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:6.798 seconds]
[sig-node] Downward API
test/e2e/common/downward_api.go:34
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":9,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:10:01.761: INFO: Only supported for providers [azure] (not skeleton)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/framework/framework.go:175

... skipping 101 lines ...
  test/e2e/framework/framework.go:175
Jul  7 08:10:03.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1029" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":4,"skipped":18,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:10:03.295: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:175

... skipping 124 lines ...
test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":48,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:10:09.722: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul  7 08:10:10.107: INFO: Waiting up to 5m0s for pod "pod-10caf9a6-e9e2-4ef4-85c6-ada8d4be9a7f" in namespace "emptydir-6904" to be "Succeeded or Failed"
Jul  7 08:10:10.122: INFO: Pod "pod-10caf9a6-e9e2-4ef4-85c6-ada8d4be9a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.605012ms
Jul  7 08:10:12.128: INFO: Pod "pod-10caf9a6-e9e2-4ef4-85c6-ada8d4be9a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020976846s
Jul  7 08:10:14.167: INFO: Pod "pod-10caf9a6-e9e2-4ef4-85c6-ada8d4be9a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060113456s
Jul  7 08:10:16.204: INFO: Pod "pod-10caf9a6-e9e2-4ef4-85c6-ada8d4be9a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096383627s
Jul  7 08:10:18.251: INFO: Pod "pod-10caf9a6-e9e2-4ef4-85c6-ada8d4be9a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.144155182s
Jul  7 08:10:20.281: INFO: Pod "pod-10caf9a6-e9e2-4ef4-85c6-ada8d4be9a7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.173817324s
STEP: Saw pod success
Jul  7 08:10:20.281: INFO: Pod "pod-10caf9a6-e9e2-4ef4-85c6-ada8d4be9a7f" satisfied condition "Succeeded or Failed"
Jul  7 08:10:20.288: INFO: Trying to get logs from node kind-worker pod pod-10caf9a6-e9e2-4ef4-85c6-ada8d4be9a7f container test-container: <nil>
STEP: delete the pod
Jul  7 08:10:20.469: INFO: Waiting for pod pod-10caf9a6-e9e2-4ef4-85c6-ada8d4be9a7f to disappear
Jul  7 08:10:20.491: INFO: Pod pod-10caf9a6-e9e2-4ef4-85c6-ada8d4be9a7f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:10.878 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:42
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":53,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:10:20.634: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  test/e2e/framework/framework.go:175

... skipping 110 lines ...
  test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume one after the other
    test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":6,"skipped":44,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:10:21.241: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 71 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jul  7 08:10:21.049: INFO: Waiting up to 5m0s for pod "downwardapi-volume-19fddcca-b6c4-4ff7-b535-811a8030cd33" in namespace "downward-api-1787" to be "Succeeded or Failed"
Jul  7 08:10:21.052: INFO: Pod "downwardapi-volume-19fddcca-b6c4-4ff7-b535-811a8030cd33": Phase="Pending", Reason="", readiness=false. Elapsed: 3.074409ms
Jul  7 08:10:23.066: INFO: Pod "downwardapi-volume-19fddcca-b6c4-4ff7-b535-811a8030cd33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017163978s
Jul  7 08:10:25.253: INFO: Pod "downwardapi-volume-19fddcca-b6c4-4ff7-b535-811a8030cd33": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204037345s
Jul  7 08:10:27.275: INFO: Pod "downwardapi-volume-19fddcca-b6c4-4ff7-b535-811a8030cd33": Phase="Pending", Reason="", readiness=false. Elapsed: 6.225772286s
Jul  7 08:10:29.308: INFO: Pod "downwardapi-volume-19fddcca-b6c4-4ff7-b535-811a8030cd33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.258965668s
STEP: Saw pod success
Jul  7 08:10:29.308: INFO: Pod "downwardapi-volume-19fddcca-b6c4-4ff7-b535-811a8030cd33" satisfied condition "Succeeded or Failed"
Jul  7 08:10:29.349: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-19fddcca-b6c4-4ff7-b535-811a8030cd33 container client-container: <nil>
STEP: delete the pod
Jul  7 08:10:29.401: INFO: Waiting for pod downwardapi-volume-19fddcca-b6c4-4ff7-b535-811a8030cd33 to disappear
Jul  7 08:10:29.422: INFO: Pod downwardapi-volume-19fddcca-b6c4-4ff7-b535-811a8030cd33 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:8.699 seconds]
[sig-storage] Downward API volume
test/e2e/common/downwardapi_volume.go:37
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":60,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:10:29.526: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 196 lines ...
test/e2e/storage/utils/framework.go:23
  When pod refers to non-existent ephemeral storage
  test/e2e/storage/ephemeral_volume.go:53
    should allow deletion of pod with invalid volume : configmap
    test/e2e/storage/ephemeral_volume.go:55
------------------------------
{"msg":"PASSED [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap","total":-1,"completed":4,"skipped":25,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:10:29.822: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  test/e2e/framework/framework.go:175

... skipping 17 lines ...
STEP: Creating a kubernetes client
Jul  7 08:10:29.594: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename pod-disks
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Pod Disks
  test/e2e/storage/pd.go:75
[It] should be able to delete a non-existent PD without error
  test/e2e/storage/pd.go:448
Jul  7 08:10:29.814: INFO: Only supported for providers [gce] (not skeleton)
[AfterEach] [sig-storage] Pod Disks
  test/e2e/framework/framework.go:175
Jul  7 08:10:29.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-disks-1528" for this suite.


S [SKIPPING] [0.259 seconds]
[sig-storage] Pod Disks
test/e2e/storage/utils/framework.go:23
  should be able to delete a non-existent PD without error [It]
  test/e2e/storage/pd.go:448

  Only supported for providers [gce] (not skeleton)

  test/e2e/storage/pd.go:449
------------------------------
... skipping 22 lines ...
Jul  7 08:09:52.132: INFO: PersistentVolumeClaim pvc-lzvj4 found but phase is Pending instead of Bound.
Jul  7 08:09:54.194: INFO: PersistentVolumeClaim pvc-lzvj4 found and phase=Bound (4.113161137s)
Jul  7 08:09:54.194: INFO: Waiting up to 3m0s for PersistentVolume local-mcswf to have phase Bound
Jul  7 08:09:54.226: INFO: PersistentVolume local-mcswf found and phase=Bound (32.640528ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-4qlk
STEP: Creating a pod to test atomic-volume-subpath
Jul  7 08:09:54.298: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-4qlk" in namespace "provisioning-9080" to be "Succeeded or Failed"
Jul  7 08:09:54.322: INFO: Pod "pod-subpath-test-preprovisionedpv-4qlk": Phase="Pending", Reason="", readiness=false. Elapsed: 23.675183ms
Jul  7 08:09:56.349: INFO: Pod "pod-subpath-test-preprovisionedpv-4qlk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051292101s
Jul  7 08:09:58.394: INFO: Pod "pod-subpath-test-preprovisionedpv-4qlk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096278646s
Jul  7 08:10:00.402: INFO: Pod "pod-subpath-test-preprovisionedpv-4qlk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103854702s
Jul  7 08:10:02.418: INFO: Pod "pod-subpath-test-preprovisionedpv-4qlk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119578176s
Jul  7 08:10:04.445: INFO: Pod "pod-subpath-test-preprovisionedpv-4qlk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.14709915s
... skipping 6 lines ...
Jul  7 08:10:18.615: INFO: Pod "pod-subpath-test-preprovisionedpv-4qlk": Phase="Running", Reason="", readiness=true. Elapsed: 24.316694015s
Jul  7 08:10:20.643: INFO: Pod "pod-subpath-test-preprovisionedpv-4qlk": Phase="Running", Reason="", readiness=true. Elapsed: 26.344534278s
Jul  7 08:10:22.664: INFO: Pod "pod-subpath-test-preprovisionedpv-4qlk": Phase="Running", Reason="", readiness=true. Elapsed: 28.365646929s
Jul  7 08:10:24.689: INFO: Pod "pod-subpath-test-preprovisionedpv-4qlk": Phase="Running", Reason="", readiness=true. Elapsed: 30.391090542s
Jul  7 08:10:26.751: INFO: Pod "pod-subpath-test-preprovisionedpv-4qlk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.452840875s
STEP: Saw pod success
Jul  7 08:10:26.751: INFO: Pod "pod-subpath-test-preprovisionedpv-4qlk" satisfied condition "Succeeded or Failed"
Jul  7 08:10:26.801: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-preprovisionedpv-4qlk container test-container-subpath-preprovisionedpv-4qlk: <nil>
STEP: delete the pod
Jul  7 08:10:27.034: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-4qlk to disappear
Jul  7 08:10:27.113: INFO: Pod pod-subpath-test-preprovisionedpv-4qlk no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-4qlk
Jul  7 08:10:27.113: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-4qlk" in namespace "provisioning-9080"
... skipping 24 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:126
      should support file as subpath [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:226
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":5,"skipped":20,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] [sig-node] Mount propagation
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 63 lines ...
Jul  7 08:10:04.524: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:10:05.329: INFO: Exec stderr: ""
Jul  7 08:10:11.480: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-4976"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-4976"/host; echo host > "/var/lib/kubelet/mount-propagation-4976"/host/file] Namespace:mount-propagation-4976 PodName:hostexec-kind-worker2-ztxnp ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jul  7 08:10:11.480: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:10:12.152: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4976 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 08:10:12.152: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:10:12.764: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil>
Jul  7 08:10:12.778: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4976 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 08:10:12.778: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:10:13.583: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jul  7 08:10:13.603: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-4976 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 08:10:13.603: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:10:14.385: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jul  7 08:10:14.393: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-4976 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 08:10:14.393: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:10:15.244: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jul  7 08:10:15.268: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4976 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 08:10:15.268: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:10:16.023: INFO: pod master mount host: stdout: "host", stderr: "" error: <nil>
Jul  7 08:10:16.034: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4976 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 08:10:16.035: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:10:16.867: INFO: pod slave mount master: stdout: "master", stderr: "" error: <nil>
Jul  7 08:10:16.893: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4976 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 08:10:16.893: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:10:17.593: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: <nil>
Jul  7 08:10:17.669: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-4976 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 08:10:17.669: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:10:18.397: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jul  7 08:10:18.403: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-4976 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 08:10:18.403: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:10:19.130: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jul  7 08:10:19.142: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4976 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 08:10:19.142: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:10:19.965: INFO: pod slave mount host: stdout: "host", stderr: "" error: <nil>
Jul  7 08:10:19.975: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4976 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 08:10:19.975: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:10:20.786: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Jul  7 08:10:20.812: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4976 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 08:10:20.813: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:10:21.774: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jul  7 08:10:21.830: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-4976 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 08:10:21.830: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:10:22.477: INFO: pod private mount private: stdout: "private", stderr: "" error: <nil>
Jul  7 08:10:22.513: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-4976 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 08:10:22.513: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:10:23.255: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Jul  7 08:10:23.262: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4976 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 08:10:23.262: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:10:24.038: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Jul  7 08:10:24.045: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-4976 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 08:10:24.045: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:10:24.895: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Jul  7 08:10:24.927: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-4976 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 08:10:24.927: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:10:25.830: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Jul  7 08:10:25.878: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-4976 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 08:10:25.878: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:10:26.811: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Jul  7 08:10:26.830: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-4976 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 08:10:26.830: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:10:27.538: INFO: pod default mount default: stdout: "default", stderr: "" error: <nil>
Jul  7 08:10:27.598: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-4976 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 08:10:27.604: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:10:28.545: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Jul  7 08:10:28.545: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-4976"/master/file` = master] Namespace:mount-propagation-4976 PodName:hostexec-kind-worker2-ztxnp ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jul  7 08:10:28.545: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:10:29.433: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! -e "/var/lib/kubelet/mount-propagation-4976"/slave/file] Namespace:mount-propagation-4976 PodName:hostexec-kind-worker2-ztxnp ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jul  7 08:10:29.433: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:10:30.125: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-4976"/host] Namespace:mount-propagation-4976 PodName:hostexec-kind-worker2-ztxnp ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jul  7 08:10:30.125: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 21 lines ...
• [SLOW TEST:82.614 seconds]
[k8s.io] [sig-node] Mount propagation
test/e2e/framework/framework.go:592
  should propagate mounts to the host
  test/e2e/node/mount_propagation.go:82
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":3,"skipped":14,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:10:35.230: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 66 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run with an explicit non-root user ID [LinuxOnly]
  test/e2e/common/security_context.go:124
Jul  7 08:10:30.186: INFO: Waiting up to 5m0s for pod "explicit-nonroot-uid" in namespace "security-context-test-5250" to be "Succeeded or Failed"
Jul  7 08:10:30.230: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 44.160928ms
Jul  7 08:10:32.295: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109550397s
Jul  7 08:10:34.318: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132001767s
Jul  7 08:10:36.339: INFO: Pod "explicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.153207462s
Jul  7 08:10:38.363: INFO: Pod "explicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.177302801s
Jul  7 08:10:38.363: INFO: Pod "explicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Jul  7 08:10:38.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5250" for this suite.


... skipping 2 lines ...
test/e2e/framework/framework.go:592
  When creating a container with runAsNonRoot
  test/e2e/common/security_context.go:99
    should run with an explicit non-root user ID [LinuxOnly]
    test/e2e/common/security_context.go:124
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]","total":-1,"completed":7,"skipped":89,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:10:38.530: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 48 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  test/e2e/framework/framework.go:175
Jul  7 08:10:38.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/json\"","total":-1,"completed":8,"skipped":95,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 4 lines ...
  test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0707 08:09:40.976675   13076 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Jul  7 08:10:43.118: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:175
Jul  7 08:10:43.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8689" for this suite.


• [SLOW TEST:73.459 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":4,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:10:43.207: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 130 lines ...
  test/e2e/framework/framework.go:175
Jul  7 08:10:43.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-4887" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates","total":-1,"completed":5,"skipped":28,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 59 lines ...
test/e2e/network/framework.go:23
  Granular Checks: Services
  test/e2e/network/networking.go:162
    should function for node-Service: http
    test/e2e/network/networking.go:193
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for node-Service: http","total":-1,"completed":5,"skipped":25,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:10:49.237: INFO: Only supported for providers [azure] (not skeleton)
... skipping 49 lines ...
  test/e2e/kubectl/kubectl.go:801
    should apply a new configuration to an existing RC
    test/e2e/kubectl/kubectl.go:802
------------------------------
S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC","total":-1,"completed":6,"skipped":30,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:10:49.302: INFO: Driver supports dynamic provisioning, skipping PreprovisionedPV pattern
... skipping 150 lines ...
Jul  7 08:10:37.139: INFO: Waiting for PV local-pvwjt6s to bind to PVC pvc-mq446
Jul  7 08:10:37.139: INFO: Waiting up to 3m0s for PersistentVolumeClaims [pvc-mq446] to have phase Bound
Jul  7 08:10:37.174: INFO: PersistentVolumeClaim pvc-mq446 found but phase is Pending instead of Bound.
Jul  7 08:10:39.184: INFO: PersistentVolumeClaim pvc-mq446 found and phase=Bound (2.044682789s)
Jul  7 08:10:39.184: INFO: Waiting up to 3m0s for PersistentVolume local-pvwjt6s to have phase Bound
Jul  7 08:10:39.204: INFO: PersistentVolume local-pvwjt6s found and phase=Bound (20.198571ms)
[It] should fail scheduling due to different NodeSelector
  test/e2e/storage/persistent_volumes-local.go:365
STEP: local-volume-type: dir
STEP: Initializing test volumes
Jul  7 08:10:39.246: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir -p /tmp/local-volume-test-7718646b-1010-4899-9fe1-d7be3422d332] Namespace:persistent-local-volumes-test-8129 PodName:hostexec-kind-worker-br59q ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true}
Jul  7 08:10:39.246: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Creating local PVCs and PVs
... skipping 30 lines ...

• [SLOW TEST:28.737 seconds]
[sig-storage] PersistentVolumes-local 
test/e2e/storage/utils/framework.go:23
  Pod with node different from PV's NodeAffinity
  test/e2e/storage/persistent_volumes-local.go:339
    should fail scheduling due to different NodeSelector
    test/e2e/storage/persistent_volumes-local.go:365
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector","total":-1,"completed":5,"skipped":26,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:10:58.572: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 246 lines ...
test/e2e/network/framework.go:23
  Granular Checks: Services
  test/e2e/network/networking.go:162
    should function for endpoint-Service: udp
    test/e2e/network/networking.go:220
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp","total":-1,"completed":2,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:10:59.539: INFO: Distro debian doesn't support ntfs -- skipping
... skipping 158 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating secret secrets-5556/secret-test-5ce3aa3a-9a45-41b9-bebb-37836d39c57a
STEP: Creating a pod to test consume secrets
Jul  7 08:10:59.181: INFO: Waiting up to 5m0s for pod "pod-configmaps-8aafdf46-15ed-4600-b0ce-911c5e75ac06" in namespace "secrets-5556" to be "Succeeded or Failed"
Jul  7 08:10:59.211: INFO: Pod "pod-configmaps-8aafdf46-15ed-4600-b0ce-911c5e75ac06": Phase="Pending", Reason="", readiness=false. Elapsed: 29.941351ms
Jul  7 08:11:01.238: INFO: Pod "pod-configmaps-8aafdf46-15ed-4600-b0ce-911c5e75ac06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057617541s
Jul  7 08:11:03.253: INFO: Pod "pod-configmaps-8aafdf46-15ed-4600-b0ce-911c5e75ac06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072177158s
Jul  7 08:11:05.438: INFO: Pod "pod-configmaps-8aafdf46-15ed-4600-b0ce-911c5e75ac06": Phase="Pending", Reason="", readiness=false. Elapsed: 6.257570469s
Jul  7 08:11:07.573: INFO: Pod "pod-configmaps-8aafdf46-15ed-4600-b0ce-911c5e75ac06": Phase="Running", Reason="", readiness=true. Elapsed: 8.392366546s
Jul  7 08:11:09.593: INFO: Pod "pod-configmaps-8aafdf46-15ed-4600-b0ce-911c5e75ac06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.411995295s
STEP: Saw pod success
Jul  7 08:11:09.593: INFO: Pod "pod-configmaps-8aafdf46-15ed-4600-b0ce-911c5e75ac06" satisfied condition "Succeeded or Failed"
Jul  7 08:11:09.612: INFO: Trying to get logs from node kind-worker pod pod-configmaps-8aafdf46-15ed-4600-b0ce-911c5e75ac06 container env-test: <nil>
STEP: delete the pod
Jul  7 08:11:09.829: INFO: Waiting for pod pod-configmaps-8aafdf46-15ed-4600-b0ce-911c5e75ac06 to disappear
Jul  7 08:11:09.839: INFO: Pod pod-configmaps-8aafdf46-15ed-4600-b0ce-911c5e75ac06 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:10.878 seconds]
[sig-api-machinery] Secrets
test/e2e/common/secrets.go:35
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":54,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:11:09.908: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 58 lines ...
      Driver local doesn't support ext3 -- skipping

      test/e2e/storage/testsuites/base.go:185
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":2,"skipped":11,"failed":0}
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:11:09.887: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename certificates
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 46 lines ...
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  7 08:10:42.803: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  test/e2e/framework/framework.go:597
Jul  7 08:10:42.826: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:11:13.598: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-3820-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8660.svc:9443/crdconvert?timeout=30s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
... skipping 6 lines ...
• [SLOW TEST:45.918 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":3,"skipped":11,"failed":0}
[BeforeEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:11:12.195: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 82 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    test/e2e/storage/testsuites/base.go:126
      should not mount / map unused volumes in a pod
      test/e2e/storage/testsuites/volumemode.go:345
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":2,"skipped":7,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:11:31.354: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 190 lines ...
      Only supported for providers [gce gke] (not skeleton)

      test/e2e/storage/drivers/in_tree.go:1264
------------------------------
SSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":6,"skipped":21,"failed":0}
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:11:16.558: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod
Jul  7 08:11:16.860: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Jul  7 08:11:31.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1309" for this suite.


• [SLOW TEST:15.438 seconds]
[k8s.io] InitContainer [NodeConformance]
test/e2e/framework/framework.go:592
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":7,"skipped":21,"failed":0}

SS
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":11,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:11:21.145: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 29 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  test/e2e/kubectl/kubectl.go:998
    should create/apply a CR with unknown fields for CRD with no validation schema
    test/e2e/kubectl/kubectl.go:999
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a CR with unknown fields for CRD with no validation schema","total":-1,"completed":5,"skipped":11,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:11:39.945: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 49 lines ...
Jul  7 08:08:37.507: INFO: PersistentVolumeClaim pvc-ml2gr found but phase is Pending instead of Bound.
Jul  7 08:08:39.725: INFO: PersistentVolumeClaim pvc-ml2gr found and phase=Bound (14.688300998s)
Jul  7 08:08:39.725: INFO: Waiting up to 3m0s for PersistentVolume local-lcrkp to have phase Bound
Jul  7 08:08:39.882: INFO: PersistentVolume local-lcrkp found and phase=Bound (156.72107ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-n8m9
STEP: Creating a pod to test subpath
Jul  7 08:08:39.988: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-n8m9" in namespace "provisioning-7679" to be "Succeeded or Failed"
Jul  7 08:08:40.078: INFO: Pod "pod-subpath-test-preprovisionedpv-n8m9": Phase="Pending", Reason="", readiness=false. Elapsed: 90.226021ms
Jul  7 08:08:42.125: INFO: Pod "pod-subpath-test-preprovisionedpv-n8m9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136967725s
Jul  7 08:08:44.151: INFO: Pod "pod-subpath-test-preprovisionedpv-n8m9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162910306s
Jul  7 08:08:46.165: INFO: Pod "pod-subpath-test-preprovisionedpv-n8m9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.177777163s
Jul  7 08:08:48.207: INFO: Pod "pod-subpath-test-preprovisionedpv-n8m9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.219138698s
Jul  7 08:08:50.339: INFO: Pod "pod-subpath-test-preprovisionedpv-n8m9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.351668081s
... skipping 76 lines ...
Jul  7 08:11:29.074: INFO: Pod "pod-subpath-test-preprovisionedpv-n8m9": Phase="Pending", Reason="", readiness=false. Elapsed: 2m49.086440893s
Jul  7 08:11:31.151: INFO: Pod "pod-subpath-test-preprovisionedpv-n8m9": Phase="Pending", Reason="", readiness=false. Elapsed: 2m51.162975562s
Jul  7 08:11:33.177: INFO: Pod "pod-subpath-test-preprovisionedpv-n8m9": Phase="Pending", Reason="", readiness=false. Elapsed: 2m53.189349424s
Jul  7 08:11:35.226: INFO: Pod "pod-subpath-test-preprovisionedpv-n8m9": Phase="Running", Reason="", readiness=true. Elapsed: 2m55.238349894s
Jul  7 08:11:37.235: INFO: Pod "pod-subpath-test-preprovisionedpv-n8m9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m57.247288152s
STEP: Saw pod success
Jul  7 08:11:37.235: INFO: Pod "pod-subpath-test-preprovisionedpv-n8m9" satisfied condition "Succeeded or Failed"
Jul  7 08:11:37.273: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-preprovisionedpv-n8m9 container test-container-subpath-preprovisionedpv-n8m9: <nil>
STEP: delete the pod
Jul  7 08:11:37.510: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-n8m9 to disappear
Jul  7 08:11:37.569: INFO: Pod pod-subpath-test-preprovisionedpv-n8m9 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-n8m9
Jul  7 08:11:37.569: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-n8m9" in namespace "provisioning-7679"
... skipping 26 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:126
      should support readOnly file specified in the volumeMount [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:375
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":2,"skipped":8,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:11:43.871: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 84 lines ...
Jul  7 08:11:18.277: INFO: PersistentVolumeClaim pvc-lm6jt found but phase is Pending instead of Bound.
Jul  7 08:11:20.318: INFO: PersistentVolumeClaim pvc-lm6jt found and phase=Bound (2.072764384s)
Jul  7 08:11:20.318: INFO: Waiting up to 3m0s for PersistentVolume local-x8ksj to have phase Bound
Jul  7 08:11:20.325: INFO: PersistentVolume local-x8ksj found and phase=Bound (6.495089ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-rffp
STEP: Creating a pod to test subpath
Jul  7 08:11:20.386: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-rffp" in namespace "provisioning-1648" to be "Succeeded or Failed"
Jul  7 08:11:20.446: INFO: Pod "pod-subpath-test-preprovisionedpv-rffp": Phase="Pending", Reason="", readiness=false. Elapsed: 59.454224ms
Jul  7 08:11:22.468: INFO: Pod "pod-subpath-test-preprovisionedpv-rffp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082181258s
Jul  7 08:11:24.500: INFO: Pod "pod-subpath-test-preprovisionedpv-rffp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114322162s
Jul  7 08:11:26.554: INFO: Pod "pod-subpath-test-preprovisionedpv-rffp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.167933286s
Jul  7 08:11:28.582: INFO: Pod "pod-subpath-test-preprovisionedpv-rffp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.19581177s
Jul  7 08:11:30.622: INFO: Pod "pod-subpath-test-preprovisionedpv-rffp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.236176019s
Jul  7 08:11:32.637: INFO: Pod "pod-subpath-test-preprovisionedpv-rffp": Phase="Pending", Reason="", readiness=false. Elapsed: 12.250476286s
Jul  7 08:11:34.659: INFO: Pod "pod-subpath-test-preprovisionedpv-rffp": Phase="Pending", Reason="", readiness=false. Elapsed: 14.27312739s
Jul  7 08:11:36.675: INFO: Pod "pod-subpath-test-preprovisionedpv-rffp": Phase="Pending", Reason="", readiness=false. Elapsed: 16.288680449s
Jul  7 08:11:38.814: INFO: Pod "pod-subpath-test-preprovisionedpv-rffp": Phase="Running", Reason="", readiness=true. Elapsed: 18.42820035s
Jul  7 08:11:41.321: INFO: Pod "pod-subpath-test-preprovisionedpv-rffp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.93480538s
STEP: Saw pod success
Jul  7 08:11:41.321: INFO: Pod "pod-subpath-test-preprovisionedpv-rffp" satisfied condition "Succeeded or Failed"
Jul  7 08:11:41.476: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-preprovisionedpv-rffp container test-container-subpath-preprovisionedpv-rffp: <nil>
STEP: delete the pod
Jul  7 08:11:42.563: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-rffp to disappear
Jul  7 08:11:42.586: INFO: Pod pod-subpath-test-preprovisionedpv-rffp no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-rffp
Jul  7 08:11:42.586: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-rffp" in namespace "provisioning-1648"
... skipping 19 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:126
      should support readOnly directory specified in the volumeMount
      test/e2e/storage/testsuites/subpath.go:360
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":7,"skipped":63,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:11:44.221: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 80 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl label
  test/e2e/kubectl/kubectl.go:1325
    should update the label on a resource  [Conformance]
    test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":8,"skipped":23,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volume-expand
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:11:51.170: INFO: Driver gluster doesn't support DynamicPV -- skipping
... skipping 38 lines ...
test/e2e/framework/framework.go:592
  when scheduling a busybox command in a pod
  test/e2e/common/kubelet.go:41
    should print the output to logs [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":68,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:11:56.835: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 167 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/testsuites/base.go:126
      should store data
      test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":9,"skipped":101,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:12:01.728: INFO: Only supported for providers [vsphere] (not skeleton)
... skipping 136 lines ...
Jul  7 08:08:22.704: INFO: PersistentVolumeClaim pvc-8m2w6 found but phase is Pending instead of Bound.
Jul  7 08:08:24.855: INFO: PersistentVolumeClaim pvc-8m2w6 found and phase=Bound (12.625207753s)
Jul  7 08:08:24.855: INFO: Waiting up to 3m0s for PersistentVolume local-hn8zp to have phase Bound
Jul  7 08:08:24.927: INFO: PersistentVolume local-hn8zp found and phase=Bound (71.981654ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-zmm9
STEP: Creating a pod to test exec-volume-test
Jul  7 08:08:25.112: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-zmm9" in namespace "volume-7122" to be "Succeeded or Failed"
Jul  7 08:08:25.138: INFO: Pod "exec-volume-test-preprovisionedpv-zmm9": Phase="Pending", Reason="", readiness=false. Elapsed: 26.735592ms
Jul  7 08:08:27.268: INFO: Pod "exec-volume-test-preprovisionedpv-zmm9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156319356s
Jul  7 08:08:29.350: INFO: Pod "exec-volume-test-preprovisionedpv-zmm9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.23800099s
Jul  7 08:08:31.393: INFO: Pod "exec-volume-test-preprovisionedpv-zmm9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.281248267s
Jul  7 08:08:33.485: INFO: Pod "exec-volume-test-preprovisionedpv-zmm9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.373915372s
Jul  7 08:08:35.567: INFO: Pod "exec-volume-test-preprovisionedpv-zmm9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.455695018s
... skipping 94 lines ...
Jul  7 08:11:51.913: INFO: Pod "exec-volume-test-preprovisionedpv-zmm9": Phase="Pending", Reason="", readiness=false. Elapsed: 3m26.801096326s
Jul  7 08:11:53.963: INFO: Pod "exec-volume-test-preprovisionedpv-zmm9": Phase="Pending", Reason="", readiness=false. Elapsed: 3m28.850961832s
Jul  7 08:11:55.995: INFO: Pod "exec-volume-test-preprovisionedpv-zmm9": Phase="Pending", Reason="", readiness=false. Elapsed: 3m30.883420525s
Jul  7 08:11:58.028: INFO: Pod "exec-volume-test-preprovisionedpv-zmm9": Phase="Pending", Reason="", readiness=false. Elapsed: 3m32.916729263s
Jul  7 08:12:00.069: INFO: Pod "exec-volume-test-preprovisionedpv-zmm9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3m34.957714722s
STEP: Saw pod success
Jul  7 08:12:00.069: INFO: Pod "exec-volume-test-preprovisionedpv-zmm9" satisfied condition "Succeeded or Failed"
Jul  7 08:12:00.098: INFO: Trying to get logs from node kind-worker pod exec-volume-test-preprovisionedpv-zmm9 container exec-container-preprovisionedpv-zmm9: <nil>
STEP: delete the pod
Jul  7 08:12:00.231: INFO: Waiting for pod exec-volume-test-preprovisionedpv-zmm9 to disappear
Jul  7 08:12:00.249: INFO: Pod exec-volume-test-preprovisionedpv-zmm9 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-zmm9
Jul  7 08:12:00.249: INFO: Deleting pod "exec-volume-test-preprovisionedpv-zmm9" in namespace "volume-7122"
... skipping 17 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/testsuites/base.go:126
      should allow exec of files on the volume
      test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":1,"skipped":7,"failed":0}

SSSS
------------------------------
[BeforeEach] [k8s.io] NodeLease
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 10 lines ...
  test/e2e/framework/framework.go:175
Jul  7 08:12:02.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-278" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":2,"skipped":11,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:12:02.429: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 53 lines ...
• [SLOW TEST:13.779 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":9,"skipped":27,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 196 lines ...
• [SLOW TEST:273.659 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  iterative rollouts should eventually progress
  test/e2e/apps/deployment.go:116
------------------------------
{"msg":"PASSED [sig-apps] Deployment iterative rollouts should eventually progress","total":-1,"completed":1,"skipped":14,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:12:07.192: INFO: Driver nfs doesn't support ext3 -- skipping
... skipping 106 lines ...
• [SLOW TEST:10.679 seconds]
[k8s.io] Pods
test/e2e/framework/framework.go:592
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":74,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:12:07.534: INFO: Only supported for providers [aws] (not skeleton)
... skipping 14 lines ...
      Only supported for providers [aws] (not skeleton)

      test/e2e/storage/drivers/in_tree.go:1677
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:127
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:08:19.886: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 54 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
    test/e2e/storage/testsuites/base.go:126
      should not mount / map unused volumes in a pod
      test/e2e/storage/testsuites/volumemode.go:345
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":2,"skipped":1,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:12:02.440: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in volume subpath
Jul  7 08:12:02.759: INFO: Waiting up to 5m0s for pod "var-expansion-c1918141-7f4d-4531-afa5-77ab1c3adba7" in namespace "var-expansion-9514" to be "Succeeded or Failed"
Jul  7 08:12:02.791: INFO: Pod "var-expansion-c1918141-7f4d-4531-afa5-77ab1c3adba7": Phase="Pending", Reason="", readiness=false. Elapsed: 31.803344ms
Jul  7 08:12:04.819: INFO: Pod "var-expansion-c1918141-7f4d-4531-afa5-77ab1c3adba7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059778656s
Jul  7 08:12:06.837: INFO: Pod "var-expansion-c1918141-7f4d-4531-afa5-77ab1c3adba7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078082205s
Jul  7 08:12:08.944: INFO: Pod "var-expansion-c1918141-7f4d-4531-afa5-77ab1c3adba7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.184716682s
Jul  7 08:12:10.977: INFO: Pod "var-expansion-c1918141-7f4d-4531-afa5-77ab1c3adba7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.217549531s
Jul  7 08:12:13.015: INFO: Pod "var-expansion-c1918141-7f4d-4531-afa5-77ab1c3adba7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.255647191s
STEP: Saw pod success
Jul  7 08:12:13.015: INFO: Pod "var-expansion-c1918141-7f4d-4531-afa5-77ab1c3adba7" satisfied condition "Succeeded or Failed"
Jul  7 08:12:13.069: INFO: Trying to get logs from node kind-worker2 pod var-expansion-c1918141-7f4d-4531-afa5-77ab1c3adba7 container dapi-container: <nil>
STEP: delete the pod
Jul  7 08:12:13.639: INFO: Waiting for pod var-expansion-c1918141-7f4d-4531-afa5-77ab1c3adba7 to disappear
Jul  7 08:12:13.661: INFO: Pod var-expansion-c1918141-7f4d-4531-afa5-77ab1c3adba7 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:11.352 seconds]
[k8s.io] Variable Expansion
test/e2e/framework/framework.go:592
  should allow substituting values in a volume subpath [sig-storage] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:12:13.816: INFO: Driver local doesn't support ntfs -- skipping
... skipping 103 lines ...
• [SLOW TEST:150.613 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
  evictions: too few pods, absolute => should not allow an eviction
  test/e2e/apps/disruption.go:222
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: too few pods, absolute =\u003e should not allow an eviction","total":-1,"completed":4,"skipped":33,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 28 lines ...
Jul  7 08:12:07.538: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly]
  test/e2e/node/security_context.go:118
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jul  7 08:12:07.906: INFO: Waiting up to 5m0s for pod "security-context-ea1e36ec-2cb3-415a-ada7-ef99ace026aa" in namespace "security-context-7873" to be "Succeeded or Failed"
Jul  7 08:12:07.942: INFO: Pod "security-context-ea1e36ec-2cb3-415a-ada7-ef99ace026aa": Phase="Pending", Reason="", readiness=false. Elapsed: 35.949811ms
Jul  7 08:12:10.038: INFO: Pod "security-context-ea1e36ec-2cb3-415a-ada7-ef99ace026aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132140834s
Jul  7 08:12:12.065: INFO: Pod "security-context-ea1e36ec-2cb3-415a-ada7-ef99ace026aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159034048s
Jul  7 08:12:14.110: INFO: Pod "security-context-ea1e36ec-2cb3-415a-ada7-ef99ace026aa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.204009679s
Jul  7 08:12:16.152: INFO: Pod "security-context-ea1e36ec-2cb3-415a-ada7-ef99ace026aa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.245538969s
Jul  7 08:12:18.245: INFO: Pod "security-context-ea1e36ec-2cb3-415a-ada7-ef99ace026aa": Phase="Running", Reason="", readiness=true. Elapsed: 10.339015313s
Jul  7 08:12:20.443: INFO: Pod "security-context-ea1e36ec-2cb3-415a-ada7-ef99ace026aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.536648483s
STEP: Saw pod success
Jul  7 08:12:20.443: INFO: Pod "security-context-ea1e36ec-2cb3-415a-ada7-ef99ace026aa" satisfied condition "Succeeded or Failed"
Jul  7 08:12:20.478: INFO: Trying to get logs from node kind-worker2 pod security-context-ea1e36ec-2cb3-415a-ada7-ef99ace026aa container test-container: <nil>
STEP: delete the pod
Jul  7 08:12:21.205: INFO: Waiting for pod security-context-ea1e36ec-2cb3-415a-ada7-ef99ace026aa to disappear
Jul  7 08:12:21.238: INFO: Pod security-context-ea1e36ec-2cb3-415a-ada7-ef99ace026aa no longer exists
[AfterEach] [k8s.io] [sig-node] Security Context
  test/e2e/framework/framework.go:175
... skipping 178 lines ...
Jul  7 08:12:06.723: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
Jul  7 08:12:07.744: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: dialing(udp) 172.18.0.4 (node) --> 10.103.8.96:90 (config.clusterIP)
Jul  7 08:12:08.051: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.103.8.96 90 | grep -v '^\s*$'] Namespace:nettest-4514 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 08:12:08.051: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:12:10.594: INFO: Failed to execute "echo hostName | nc -w 1 -u 10.103.8.96 90 | grep -v '^\\s*$'": command terminated with exit code 1, stdout: "", stderr: ""
Jul  7 08:12:10.594: INFO: Waiting for [netserver-0 netserver-1] endpoints (expected=[netserver-0 netserver-1], actual=[])
Jul  7 08:12:12.625: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.103.8.96 90 | grep -v '^\s*$'] Namespace:nettest-4514 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 08:12:12.625: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:12:15.589: INFO: Waiting for [netserver-0] endpoints (expected=[netserver-0 netserver-1], actual=[netserver-1])
Jul  7 08:12:17.621: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.103.8.96 90 | grep -v '^\s*$'] Namespace:nettest-4514 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  7 08:12:17.621: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 16 lines ...
test/e2e/network/framework.go:23
  Granular Checks: Services
  test/e2e/network/networking.go:162
    should function for node-Service: udp
    test/e2e/network/networking.go:202
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for node-Service: udp","total":-1,"completed":3,"skipped":34,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:12:27.350: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 102 lines ...
      test/e2e/storage/testsuites/volume_expand.go:162

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:180
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly]","total":-1,"completed":10,"skipped":80,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:12:21.400: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 24 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl run pod
  test/e2e/kubectl/kubectl.go:1536
    should create a pod from an image when restart is Never  [Conformance]
    test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":-1,"completed":11,"skipped":80,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 5 lines ...
[It] should support readOnly directory specified in the volumeMount
  test/e2e/storage/testsuites/subpath.go:360
Jul  7 08:12:05.227: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jul  7 08:12:05.227: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-jcpc
STEP: Creating a pod to test subpath
Jul  7 08:12:05.545: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-jcpc" in namespace "provisioning-2858" to be "Succeeded or Failed"
Jul  7 08:12:05.674: INFO: Pod "pod-subpath-test-inlinevolume-jcpc": Phase="Pending", Reason="", readiness=false. Elapsed: 128.537586ms
Jul  7 08:12:07.894: INFO: Pod "pod-subpath-test-inlinevolume-jcpc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.349478042s
Jul  7 08:12:09.944: INFO: Pod "pod-subpath-test-inlinevolume-jcpc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.398798234s
Jul  7 08:12:12.001: INFO: Pod "pod-subpath-test-inlinevolume-jcpc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.455854927s
Jul  7 08:12:14.110: INFO: Pod "pod-subpath-test-inlinevolume-jcpc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.565229101s
Jul  7 08:12:16.136: INFO: Pod "pod-subpath-test-inlinevolume-jcpc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.591384574s
... skipping 2 lines ...
Jul  7 08:12:22.595: INFO: Pod "pod-subpath-test-inlinevolume-jcpc": Phase="Pending", Reason="", readiness=false. Elapsed: 17.049832189s
Jul  7 08:12:24.662: INFO: Pod "pod-subpath-test-inlinevolume-jcpc": Phase="Pending", Reason="", readiness=false. Elapsed: 19.116794415s
Jul  7 08:12:26.709: INFO: Pod "pod-subpath-test-inlinevolume-jcpc": Phase="Pending", Reason="", readiness=false. Elapsed: 21.164265779s
Jul  7 08:12:28.784: INFO: Pod "pod-subpath-test-inlinevolume-jcpc": Phase="Pending", Reason="", readiness=false. Elapsed: 23.239509347s
Jul  7 08:12:30.868: INFO: Pod "pod-subpath-test-inlinevolume-jcpc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.322940862s
STEP: Saw pod success
Jul  7 08:12:30.868: INFO: Pod "pod-subpath-test-inlinevolume-jcpc" satisfied condition "Succeeded or Failed"
Jul  7 08:12:30.910: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-inlinevolume-jcpc container test-container-subpath-inlinevolume-jcpc: <nil>
STEP: delete the pod
Jul  7 08:12:31.059: INFO: Waiting for pod pod-subpath-test-inlinevolume-jcpc to disappear
Jul  7 08:12:31.077: INFO: Pod pod-subpath-test-inlinevolume-jcpc no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-jcpc
Jul  7 08:12:31.077: INFO: Deleting pod "pod-subpath-test-inlinevolume-jcpc" in namespace "provisioning-2858"
... skipping 12 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:126
      should support readOnly directory specified in the volumeMount
      test/e2e/storage/testsuites/subpath.go:360
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":10,"skipped":30,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:12:31.369: INFO: Driver emptydir doesn't support ntfs -- skipping
... skipping 198 lines ...
• [SLOW TEST:258.561 seconds]
[k8s.io] Probing container
test/e2e/framework/framework.go:592
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0}

SSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data","total":-1,"completed":6,"skipped":46,"failed":0}
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:12:24.501: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's command
Jul  7 08:12:24.886: INFO: Waiting up to 5m0s for pod "var-expansion-2aed2c7c-9f4d-403f-a431-8c00297113be" in namespace "var-expansion-5159" to be "Succeeded or Failed"
Jul  7 08:12:24.981: INFO: Pod "var-expansion-2aed2c7c-9f4d-403f-a431-8c00297113be": Phase="Pending", Reason="", readiness=false. Elapsed: 95.035286ms
Jul  7 08:12:27.044: INFO: Pod "var-expansion-2aed2c7c-9f4d-403f-a431-8c00297113be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157797485s
Jul  7 08:12:29.065: INFO: Pod "var-expansion-2aed2c7c-9f4d-403f-a431-8c00297113be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178267751s
Jul  7 08:12:31.133: INFO: Pod "var-expansion-2aed2c7c-9f4d-403f-a431-8c00297113be": Phase="Pending", Reason="", readiness=false. Elapsed: 6.247034199s
Jul  7 08:12:33.164: INFO: Pod "var-expansion-2aed2c7c-9f4d-403f-a431-8c00297113be": Phase="Pending", Reason="", readiness=false. Elapsed: 8.27810244s
Jul  7 08:12:35.208: INFO: Pod "var-expansion-2aed2c7c-9f4d-403f-a431-8c00297113be": Phase="Running", Reason="", readiness=true. Elapsed: 10.32147996s
Jul  7 08:12:37.282: INFO: Pod "var-expansion-2aed2c7c-9f4d-403f-a431-8c00297113be": Phase="Running", Reason="", readiness=true. Elapsed: 12.395559245s
Jul  7 08:12:39.339: INFO: Pod "var-expansion-2aed2c7c-9f4d-403f-a431-8c00297113be": Phase="Running", Reason="", readiness=true. Elapsed: 14.452623371s
Jul  7 08:12:41.342: INFO: Pod "var-expansion-2aed2c7c-9f4d-403f-a431-8c00297113be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.456098206s
STEP: Saw pod success
Jul  7 08:12:41.343: INFO: Pod "var-expansion-2aed2c7c-9f4d-403f-a431-8c00297113be" satisfied condition "Succeeded or Failed"
Jul  7 08:12:41.345: INFO: Trying to get logs from node kind-worker2 pod var-expansion-2aed2c7c-9f4d-403f-a431-8c00297113be container dapi-container: <nil>
STEP: delete the pod
Jul  7 08:12:41.714: INFO: Waiting for pod var-expansion-2aed2c7c-9f4d-403f-a431-8c00297113be to disappear
Jul  7 08:12:41.779: INFO: Pod var-expansion-2aed2c7c-9f4d-403f-a431-8c00297113be no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:17.339 seconds]
[k8s.io] Variable Expansion
test/e2e/framework/framework.go:592
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":46,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:12:41.869: INFO: Driver local doesn't support ext3 -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/framework/framework.go:175

... skipping 153 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/testsuites/base.go:126
      should store data
      test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":3,"skipped":22,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:12:43.587: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:175

... skipping 145 lines ...
Jul  7 08:08:52.425: INFO: PersistentVolumeClaim pvc-5kxlb found but phase is Pending instead of Bound.
Jul  7 08:08:54.514: INFO: PersistentVolumeClaim pvc-5kxlb found and phase=Bound (4.188771105s)
Jul  7 08:08:54.514: INFO: Waiting up to 3m0s for PersistentVolume local-hmb72 to have phase Bound
Jul  7 08:08:54.577: INFO: PersistentVolume local-hmb72 found and phase=Bound (63.378901ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-wkc6
STEP: Creating a pod to test subpath
Jul  7 08:08:54.771: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-wkc6" in namespace "provisioning-6213" to be "Succeeded or Failed"
Jul  7 08:08:54.828: INFO: Pod "pod-subpath-test-preprovisionedpv-wkc6": Phase="Pending", Reason="", readiness=false. Elapsed: 57.079904ms
Jul  7 08:08:56.847: INFO: Pod "pod-subpath-test-preprovisionedpv-wkc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076200595s
Jul  7 08:08:58.920: INFO: Pod "pod-subpath-test-preprovisionedpv-wkc6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14861558s
Jul  7 08:09:00.961: INFO: Pod "pod-subpath-test-preprovisionedpv-wkc6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.190381629s
Jul  7 08:09:02.971: INFO: Pod "pod-subpath-test-preprovisionedpv-wkc6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.199797678s
Jul  7 08:09:05.214: INFO: Pod "pod-subpath-test-preprovisionedpv-wkc6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.443180601s
... skipping 99 lines ...
Jul  7 08:12:31.390: INFO: Pod "pod-subpath-test-preprovisionedpv-wkc6": Phase="Pending", Reason="", readiness=false. Elapsed: 3m36.618518457s
Jul  7 08:12:33.444: INFO: Pod "pod-subpath-test-preprovisionedpv-wkc6": Phase="Running", Reason="", readiness=true. Elapsed: 3m38.67346869s
Jul  7 08:12:35.457: INFO: Pod "pod-subpath-test-preprovisionedpv-wkc6": Phase="Running", Reason="", readiness=true. Elapsed: 3m40.685769519s
Jul  7 08:12:37.478: INFO: Pod "pod-subpath-test-preprovisionedpv-wkc6": Phase="Running", Reason="", readiness=true. Elapsed: 3m42.707451842s
Jul  7 08:12:39.539: INFO: Pod "pod-subpath-test-preprovisionedpv-wkc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 3m44.767604539s
STEP: Saw pod success
Jul  7 08:12:39.539: INFO: Pod "pod-subpath-test-preprovisionedpv-wkc6" satisfied condition "Succeeded or Failed"
Jul  7 08:12:39.558: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-preprovisionedpv-wkc6 container test-container-subpath-preprovisionedpv-wkc6: <nil>
STEP: delete the pod
Jul  7 08:12:40.114: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-wkc6 to disappear
Jul  7 08:12:40.130: INFO: Pod pod-subpath-test-preprovisionedpv-wkc6 no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-wkc6
Jul  7 08:12:40.131: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-wkc6" in namespace "provisioning-6213"
... skipping 19 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:126
      should support readOnly file specified in the volumeMount [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:375
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":4,"skipped":17,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:12:43.786: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 76 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    test/e2e/storage/testsuites/base.go:126
      should not mount / map unused volumes in a pod
      test/e2e/storage/testsuites/volumemode.go:345
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":3,"skipped":20,"failed":0}

SSSSSS
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":36,"failed":0}
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:12:19.633: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 36 lines ...
• [SLOW TEST:43.995 seconds]
[k8s.io] InitContainer [NodeConformance]
test/e2e/framework/framework.go:592
  should invoke init containers on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":4,"skipped":30,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:12:57.848: INFO: Only supported for providers [openstack] (not skeleton)
... skipping 282 lines ...
Jul  7 08:12:27.802: INFO: PersistentVolumeClaim csi-hostpathc729h found but phase is Pending instead of Bound.
Jul  7 08:12:29.869: INFO: PersistentVolumeClaim csi-hostpathc729h found but phase is Pending instead of Bound.
Jul  7 08:12:31.965: INFO: PersistentVolumeClaim csi-hostpathc729h found but phase is Pending instead of Bound.
Jul  7 08:12:34.091: INFO: PersistentVolumeClaim csi-hostpathc729h found but phase is Pending instead of Bound.
Jul  7 08:12:36.110: INFO: PersistentVolumeClaim csi-hostpathc729h found but phase is Pending instead of Bound.
Jul  7 08:12:38.117: INFO: PersistentVolumeClaim csi-hostpathc729h found but phase is Pending instead of Bound.
Jul  7 08:12:40.117: FAIL: Unexpected error:
    <*errors.errorString | 0xc0022e2ee0>: {
        s: "PersistentVolumeClaims [csi-hostpathc729h] not all in phase Bound within 5m0s",
    }
    PersistentVolumeClaims [csi-hostpathc729h] not all in phase Bound within 5m0s
occurred

... skipping 458 lines ...
  test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:126
      should support file as subpath [LinuxOnly] [It]
      test/e2e/storage/testsuites/subpath.go:226

      Jul  7 08:12:40.117: Unexpected error:
          <*errors.errorString | 0xc0022e2ee0>: {
              s: "PersistentVolumeClaims [csi-hostpathc729h] not all in phase Bound within 5m0s",
          }
          PersistentVolumeClaims [csi-hostpathc729h] not all in phase Bound within 5m0s
      occurred

      test/e2e/storage/testsuites/base.go:441
------------------------------
{"msg":"FAILED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":0,"skipped":7,"failed":1,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]"]}

SSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 67 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:390
    should contain last line of the log
    test/e2e/kubectl/kubectl.go:616
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should contain last line of the log","total":-1,"completed":10,"skipped":127,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:13:07.287: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 122 lines ...
  test/e2e/storage/persistent_volumes-local.go:188
    Two pods mounting a local volume one after the other
    test/e2e/storage/persistent_volumes-local.go:250
      should be able to write from pod1 and read from pod2
      test/e2e/storage/persistent_volumes-local.go:251
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2","total":-1,"completed":5,"skipped":35,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:13:10.135: INFO: Only supported for providers [openstack] (not skeleton)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumes
  test/e2e/framework/framework.go:175

... skipping 344 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:390
    should support inline execution and attach
    test/e2e/kubectl/kubectl.go:561
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support inline execution and attach","total":-1,"completed":3,"skipped":19,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Generated clientset
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 14 lines ...
  test/e2e/framework/framework.go:175
Jul  7 08:13:14.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "clientset-206" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create v1beta1 cronJobs, delete cronJobs, watch cronJobs","total":-1,"completed":4,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:13:15.475: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/framework/framework.go:175

... skipping 119 lines ...
  test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and read from pod1
      test/e2e/storage/persistent_volumes-local.go:228
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":11,"skipped":49,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/storage/testsuites/base.go:127
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
... skipping 7 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: nfs]
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/testsuites/base.go:126
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:191

      Driver "nfs" does not support topology - skipping

      test/e2e/storage/testsuites/topology.go:96
------------------------------
SSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":3,"skipped":3,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:12:34.657: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 33 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl replace
  test/e2e/kubectl/kubectl.go:1572
    should update a single-container pod's image  [Conformance]
    test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery Custom resource should have storage version hash","total":-1,"completed":11,"skipped":136,"failed":0}
[BeforeEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:13:10.129: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 86 lines ...
  test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and read from pod1
      test/e2e/storage/persistent_volumes-local.go:228
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":5,"skipped":21,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:13:18.018: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-7506122d-63ce-4376-94d3-df838818e4ac
STEP: Creating a pod to test consume secrets
Jul  7 08:13:18.511: INFO: Waiting up to 5m0s for pod "pod-secrets-72ba8271-7b42-4135-bbb1-0f0938b05a32" in namespace "secrets-34" to be "Succeeded or Failed"
Jul  7 08:13:18.578: INFO: Pod "pod-secrets-72ba8271-7b42-4135-bbb1-0f0938b05a32": Phase="Pending", Reason="", readiness=false. Elapsed: 66.633041ms
Jul  7 08:13:20.606: INFO: Pod "pod-secrets-72ba8271-7b42-4135-bbb1-0f0938b05a32": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094633261s
Jul  7 08:13:22.635: INFO: Pod "pod-secrets-72ba8271-7b42-4135-bbb1-0f0938b05a32": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123935732s
Jul  7 08:13:24.746: INFO: Pod "pod-secrets-72ba8271-7b42-4135-bbb1-0f0938b05a32": Phase="Pending", Reason="", readiness=false. Elapsed: 6.23476113s
Jul  7 08:13:26.924: INFO: Pod "pod-secrets-72ba8271-7b42-4135-bbb1-0f0938b05a32": Phase="Pending", Reason="", readiness=false. Elapsed: 8.412862719s
Jul  7 08:13:28.951: INFO: Pod "pod-secrets-72ba8271-7b42-4135-bbb1-0f0938b05a32": Phase="Pending", Reason="", readiness=false. Elapsed: 10.440413796s
Jul  7 08:13:31.139: INFO: Pod "pod-secrets-72ba8271-7b42-4135-bbb1-0f0938b05a32": Phase="Pending", Reason="", readiness=false. Elapsed: 12.627733596s
Jul  7 08:13:33.334: INFO: Pod "pod-secrets-72ba8271-7b42-4135-bbb1-0f0938b05a32": Phase="Pending", Reason="", readiness=false. Elapsed: 14.823267856s
Jul  7 08:13:35.341: INFO: Pod "pod-secrets-72ba8271-7b42-4135-bbb1-0f0938b05a32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.83045353s
STEP: Saw pod success
Jul  7 08:13:35.342: INFO: Pod "pod-secrets-72ba8271-7b42-4135-bbb1-0f0938b05a32" satisfied condition "Succeeded or Failed"
Jul  7 08:13:35.361: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-72ba8271-7b42-4135-bbb1-0f0938b05a32 container secret-env-test: <nil>
STEP: delete the pod
Jul  7 08:13:35.478: INFO: Waiting for pod pod-secrets-72ba8271-7b42-4135-bbb1-0f0938b05a32 to disappear
Jul  7 08:13:35.524: INFO: Pod pod-secrets-72ba8271-7b42-4135-bbb1-0f0938b05a32 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:17.536 seconds]
[sig-api-machinery] Secrets
test/e2e/common/secrets.go:35
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":62,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:13:35.578: INFO: Driver gluster doesn't support DynamicPV -- skipping
... skipping 58 lines ...
      Only supported for providers [azure] (not skeleton)

      test/e2e/storage/drivers/in_tree.go:1531
------------------------------
S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":-1,"completed":4,"skipped":3,"failed":0}
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:13:29.564: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jul  7 08:13:29.772: INFO: Waiting up to 5m0s for pod "downwardapi-volume-48add2a0-68ca-4185-a88b-933ccafba8e6" in namespace "projected-2108" to be "Succeeded or Failed"
Jul  7 08:13:29.783: INFO: Pod "downwardapi-volume-48add2a0-68ca-4185-a88b-933ccafba8e6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.759096ms
Jul  7 08:13:31.799: INFO: Pod "downwardapi-volume-48add2a0-68ca-4185-a88b-933ccafba8e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027427851s
Jul  7 08:13:33.884: INFO: Pod "downwardapi-volume-48add2a0-68ca-4185-a88b-933ccafba8e6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112364381s
Jul  7 08:13:35.965: INFO: Pod "downwardapi-volume-48add2a0-68ca-4185-a88b-933ccafba8e6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.192839015s
Jul  7 08:13:38.027: INFO: Pod "downwardapi-volume-48add2a0-68ca-4185-a88b-933ccafba8e6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.255318698s
Jul  7 08:13:40.253: INFO: Pod "downwardapi-volume-48add2a0-68ca-4185-a88b-933ccafba8e6": Phase="Running", Reason="", readiness=true. Elapsed: 10.480708433s
Jul  7 08:13:42.352: INFO: Pod "downwardapi-volume-48add2a0-68ca-4185-a88b-933ccafba8e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.580538757s
STEP: Saw pod success
Jul  7 08:13:42.353: INFO: Pod "downwardapi-volume-48add2a0-68ca-4185-a88b-933ccafba8e6" satisfied condition "Succeeded or Failed"
Jul  7 08:13:42.417: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-48add2a0-68ca-4185-a88b-933ccafba8e6 container client-container: <nil>
STEP: delete the pod
Jul  7 08:13:42.942: INFO: Waiting for pod downwardapi-volume-48add2a0-68ca-4185-a88b-933ccafba8e6 to disappear
Jul  7 08:13:42.947: INFO: Pod downwardapi-volume-48add2a0-68ca-4185-a88b-933ccafba8e6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:13.454 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:36
  should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":3,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:13:43.081: INFO: Only supported for providers [azure] (not skeleton)
... skipping 76 lines ...
  test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and read from pod1
      test/e2e/storage/persistent_volumes-local.go:228
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":1,"skipped":10,"failed":1,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]"]}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:13:43.933: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/framework/framework.go:175

... skipping 54 lines ...
  test/e2e/framework/framework.go:175
Jul  7 08:13:44.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-2274" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.","total":-1,"completed":2,"skipped":16,"failed":1,"failures":["[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:13:44.321: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 63 lines ...
STEP: Scaling down replication controller to zero
STEP: Scaling ReplicationController slow-terminating-unready-pod in namespace services-7017 to 0
STEP: Update service to not tolerate unready services
STEP: Check if pod is unreachable
Jul  7 08:13:31.830: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35195 --kubeconfig=/root/.kube/kind-test-config exec --namespace=services-7017 execpod-rtgnp -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7017.svc.cluster.local:80/; test "$?" -ne "0"'
Jul  7 08:13:34.686: INFO: rc: 1
Jul  7 08:13:34.686: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: NOW: 2020-07-07 08:13:34.331528497 +0000 UTC m=+29.667439294, err error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35195 --kubeconfig=/root/.kube/kind-test-config exec --namespace=services-7017 execpod-rtgnp -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7017.svc.cluster.local:80/; test "$?" -ne "0":
Command stdout:
NOW: 2020-07-07 08:13:34.331528497 +0000 UTC m=+29.667439294
stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-7017.svc.cluster.local:80/
+ test 0 -ne 0
command terminated with exit code 1

error:
exit status 1
Jul  7 08:13:36.686: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35195 --kubeconfig=/root/.kube/kind-test-config exec --namespace=services-7017 execpod-rtgnp -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7017.svc.cluster.local:80/; test "$?" -ne "0"'
Jul  7 08:13:39.848: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-7017.svc.cluster.local:80/\n+ test 7 -ne 0\n"
Jul  7 08:13:39.848: INFO: stdout: ""
STEP: Update service to tolerate unready services again
STEP: Check if terminating pod is available through service
Jul  7 08:13:39.902: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35195 --kubeconfig=/root/.kube/kind-test-config exec --namespace=services-7017 execpod-rtgnp -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7017.svc.cluster.local:80/'
Jul  7 08:13:43.482: INFO: rc: 7
Jul  7 08:13:43.482: INFO: expected un-ready endpoint for Service slow-terminating-unready-pod, stdout: , err error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35195 --kubeconfig=/root/.kube/kind-test-config exec --namespace=services-7017 execpod-rtgnp -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7017.svc.cluster.local:80/:
Command stdout:

stderr:
+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-7017.svc.cluster.local:80/
command terminated with exit code 7

error:
exit status 7
Jul  7 08:13:45.490: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35195 --kubeconfig=/root/.kube/kind-test-config exec --namespace=services-7017 execpod-rtgnp -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://tolerate-unready.services-7017.svc.cluster.local:80/'
Jul  7 08:13:47.414: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://tolerate-unready.services-7017.svc.cluster.local:80/\n"
Jul  7 08:13:47.414: INFO: stdout: "NOW: 2020-07-07 08:13:47.179782815 +0000 UTC m=+42.515693622"
STEP: Remove pods immediately
STEP: stopping RC slow-terminating-unready-pod in namespace services-7017
... skipping 9 lines ...
• [SLOW TEST:63.556 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should create endpoints for unready pods
  test/e2e/network/service.go:1979
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":4,"skipped":26,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:13:48.300: INFO: Driver gluster doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/framework/framework.go:175

... skipping 20 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/empty_dir.go:47
[It] volume on default medium should have the correct mode using FSGroup
  test/e2e/common/empty_dir.go:68
STEP: Creating a pod to test emptydir volume type on node default medium
Jul  7 08:13:35.999: INFO: Waiting up to 5m0s for pod "pod-f0ec7eda-956e-4a96-b40b-c0f7f0aa49c2" in namespace "emptydir-6630" to be "Succeeded or Failed"
Jul  7 08:13:36.070: INFO: Pod "pod-f0ec7eda-956e-4a96-b40b-c0f7f0aa49c2": Phase="Pending", Reason="", readiness=false. Elapsed: 71.204406ms
Jul  7 08:13:38.199: INFO: Pod "pod-f0ec7eda-956e-4a96-b40b-c0f7f0aa49c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199971421s
Jul  7 08:13:40.372: INFO: Pod "pod-f0ec7eda-956e-4a96-b40b-c0f7f0aa49c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.373470749s
Jul  7 08:13:42.504: INFO: Pod "pod-f0ec7eda-956e-4a96-b40b-c0f7f0aa49c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.504907206s
Jul  7 08:13:44.607: INFO: Pod "pod-f0ec7eda-956e-4a96-b40b-c0f7f0aa49c2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.608362572s
Jul  7 08:13:46.635: INFO: Pod "pod-f0ec7eda-956e-4a96-b40b-c0f7f0aa49c2": Phase="Running", Reason="", readiness=true. Elapsed: 10.636664052s
Jul  7 08:13:48.703: INFO: Pod "pod-f0ec7eda-956e-4a96-b40b-c0f7f0aa49c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.70382372s
STEP: Saw pod success
Jul  7 08:13:48.703: INFO: Pod "pod-f0ec7eda-956e-4a96-b40b-c0f7f0aa49c2" satisfied condition "Succeeded or Failed"
Jul  7 08:13:48.782: INFO: Trying to get logs from node kind-worker2 pod pod-f0ec7eda-956e-4a96-b40b-c0f7f0aa49c2 container test-container: <nil>
STEP: delete the pod
Jul  7 08:13:49.317: INFO: Waiting for pod pod-f0ec7eda-956e-4a96-b40b-c0f7f0aa49c2 to disappear
Jul  7 08:13:49.473: INFO: Pod pod-f0ec7eda-956e-4a96-b40b-c0f7f0aa49c2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
... skipping 6 lines ...
test/e2e/common/empty_dir.go:42
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/empty_dir.go:45
    volume on default medium should have the correct mode using FSGroup
    test/e2e/common/empty_dir.go:68
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup","total":-1,"completed":13,"skipped":69,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:13:49.694: INFO: Driver local doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/framework/framework.go:175

... skipping 45 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/projected_downwardapi.go:107
STEP: Creating a pod to test downward API volume plugin
Jul  7 08:13:43.732: INFO: Waiting up to 5m0s for pod "metadata-volume-7f27431a-995a-437c-86da-d6540b2bb90d" in namespace "projected-9963" to be "Succeeded or Failed"
Jul  7 08:13:43.788: INFO: Pod "metadata-volume-7f27431a-995a-437c-86da-d6540b2bb90d": Phase="Pending", Reason="", readiness=false. Elapsed: 56.291374ms
Jul  7 08:13:45.927: INFO: Pod "metadata-volume-7f27431a-995a-437c-86da-d6540b2bb90d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194986687s
Jul  7 08:13:47.978: INFO: Pod "metadata-volume-7f27431a-995a-437c-86da-d6540b2bb90d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.245706346s
Jul  7 08:13:50.038: INFO: Pod "metadata-volume-7f27431a-995a-437c-86da-d6540b2bb90d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.305557864s
Jul  7 08:13:52.074: INFO: Pod "metadata-volume-7f27431a-995a-437c-86da-d6540b2bb90d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.342080569s
Jul  7 08:13:54.207: INFO: Pod "metadata-volume-7f27431a-995a-437c-86da-d6540b2bb90d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.475017991s
Jul  7 08:13:56.633: INFO: Pod "metadata-volume-7f27431a-995a-437c-86da-d6540b2bb90d": Phase="Running", Reason="", readiness=true. Elapsed: 12.901246517s
Jul  7 08:13:58.800: INFO: Pod "metadata-volume-7f27431a-995a-437c-86da-d6540b2bb90d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.068058087s
STEP: Saw pod success
Jul  7 08:13:58.822: INFO: Pod "metadata-volume-7f27431a-995a-437c-86da-d6540b2bb90d" satisfied condition "Succeeded or Failed"
Jul  7 08:13:59.268: INFO: Trying to get logs from node kind-worker pod metadata-volume-7f27431a-995a-437c-86da-d6540b2bb90d container client-container: <nil>
STEP: delete the pod
Jul  7 08:14:01.101: INFO: Waiting for pod metadata-volume-7f27431a-995a-437c-86da-d6540b2bb90d to disappear
Jul  7 08:14:01.294: INFO: Pod metadata-volume-7f27431a-995a-437c-86da-d6540b2bb90d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:18.669 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:36
  should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/projected_downwardapi.go:107
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]","total":-1,"completed":6,"skipped":6,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:14:01.788: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 60 lines ...
test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":-1,"completed":14,"skipped":76,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:14:03.507: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 5 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-link]
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/testsuites/base.go:126
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:180
------------------------------
... skipping 62 lines ...
• [SLOW TEST:82.390 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":8,"skipped":48,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:14:04.268: INFO: Driver hostPath doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/framework/framework.go:175

... skipping 32 lines ...
  test/e2e/framework/framework.go:175
Jul  7 08:14:04.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-6506" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return pod details","total":-1,"completed":15,"skipped":81,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 107 lines ...
test/e2e/network/framework.go:23
  Granular Checks: Services
  test/e2e/network/networking.go:162
    should function for pod-Service: udp
    test/e2e/network/networking.go:173
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for pod-Service: udp","total":-1,"completed":3,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] [sig-node] crictl
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 26 lines ...
STEP: Creating a kubernetes client
Jul  7 08:12:29.527: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
STEP: creating the pod
Jul  7 08:12:29.764: INFO: PodSpec: initContainers in spec.initContainers
Jul  7 08:14:09.365: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-566c730e-01c1-4edb-b491-e4f6d673c053", GenerateName:"", Namespace:"init-container-2357", SelfLink:"/api/v1/namespaces/init-container-2357/pods/pod-init-566c730e-01c1-4edb-b491-e4f6d673c053", UID:"1ce777a5-8127-4408-a1bc-e9ab0ffc51cd", ResourceVersion:"11817", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729706349, loc:(*time.Location)(0x7db1f60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"764899780"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0019603a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0019603c0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001960440), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001960460)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-jslxj", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0019645c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jslxj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", 
Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jslxj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jslxj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000724508), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kind-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000f8c150), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0007246b0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0007247e0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0007247e8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0007247ec), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002b60110), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", 
LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706349, loc:(*time.Location)(0x7db1f60)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706349, loc:(*time.Location)(0x7db1f60)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706349, loc:(*time.Location)(0x7db1f60)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706349, loc:(*time.Location)(0x7db1f60)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.2", PodIP:"10.244.2.99", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.99"}}, StartTime:(*v1.Time)(0xc0019604c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000f8c380)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000f8c3f0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://d6c8b31fcd3c05cfec9931dc1d2f681074230a0bfc0fd4482d706713698f6343", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001960560), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001960540), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc000724aaf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:175
Jul  7 08:14:09.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2357" for this suite.


• [SLOW TEST:100.135 seconds]
[k8s.io] InitContainer [NodeConformance]
test/e2e/framework/framework.go:592
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":12,"skipped":88,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] Container Lifecycle Hook
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 37 lines ...
test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":24,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:14:11.238: INFO: Driver local doesn't support ext4 -- skipping
... skipping 103 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:126
      should be able to unmount after the subpath directory is deleted
      test/e2e/storage/testsuites/subpath.go:438
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":4,"skipped":53,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Mounted volume expand
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 109 lines ...
  test/e2e/apps/rc.go:68

  Only supported for providers [gce gke] (not skeleton)

  test/e2e/apps/rc.go:70
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":-1,"completed":12,"skipped":136,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:13:29.843: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 121 lines ...
• [SLOW TEST:157.684 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
  evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
  test/e2e/apps/disruption.go:222
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage =\u003e should allow an eviction","total":-1,"completed":6,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/testsuites/base.go:127
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
... skipping 90 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Inline-volume (default fs)] volumes
    test/e2e/storage/testsuites/base.go:126
      should store data
      test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":5,"skipped":41,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext3)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:14:20.798: INFO: Only supported for providers [azure] (not skeleton)
... skipping 165 lines ...
Jul  7 08:14:07.368: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test override arguments
Jul  7 08:14:08.308: INFO: Waiting up to 5m0s for pod "client-containers-4a35f73e-1597-406c-8236-605ed100fb80" in namespace "containers-9475" to be "Succeeded or Failed"
Jul  7 08:14:08.340: INFO: Pod "client-containers-4a35f73e-1597-406c-8236-605ed100fb80": Phase="Pending", Reason="", readiness=false. Elapsed: 31.478072ms
Jul  7 08:14:10.361: INFO: Pod "client-containers-4a35f73e-1597-406c-8236-605ed100fb80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053357251s
Jul  7 08:14:12.556: INFO: Pod "client-containers-4a35f73e-1597-406c-8236-605ed100fb80": Phase="Pending", Reason="", readiness=false. Elapsed: 4.247402163s
Jul  7 08:14:14.663: INFO: Pod "client-containers-4a35f73e-1597-406c-8236-605ed100fb80": Phase="Pending", Reason="", readiness=false. Elapsed: 6.354700311s
Jul  7 08:14:16.874: INFO: Pod "client-containers-4a35f73e-1597-406c-8236-605ed100fb80": Phase="Pending", Reason="", readiness=false. Elapsed: 8.56570234s
Jul  7 08:14:18.896: INFO: Pod "client-containers-4a35f73e-1597-406c-8236-605ed100fb80": Phase="Pending", Reason="", readiness=false. Elapsed: 10.587557973s
Jul  7 08:14:20.945: INFO: Pod "client-containers-4a35f73e-1597-406c-8236-605ed100fb80": Phase="Pending", Reason="", readiness=false. Elapsed: 12.636431184s
Jul  7 08:14:22.976: INFO: Pod "client-containers-4a35f73e-1597-406c-8236-605ed100fb80": Phase="Pending", Reason="", readiness=false. Elapsed: 14.667612415s
Jul  7 08:14:25.035: INFO: Pod "client-containers-4a35f73e-1597-406c-8236-605ed100fb80": Phase="Running", Reason="", readiness=true. Elapsed: 16.726821415s
Jul  7 08:14:27.052: INFO: Pod "client-containers-4a35f73e-1597-406c-8236-605ed100fb80": Phase="Running", Reason="", readiness=true. Elapsed: 18.744148545s
Jul  7 08:14:29.338: INFO: Pod "client-containers-4a35f73e-1597-406c-8236-605ed100fb80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.029528671s
STEP: Saw pod success
Jul  7 08:14:29.338: INFO: Pod "client-containers-4a35f73e-1597-406c-8236-605ed100fb80" satisfied condition "Succeeded or Failed"
Jul  7 08:14:29.392: INFO: Trying to get logs from node kind-worker pod client-containers-4a35f73e-1597-406c-8236-605ed100fb80 container test-container: <nil>
STEP: delete the pod
Jul  7 08:14:29.741: INFO: Waiting for pod client-containers-4a35f73e-1597-406c-8236-605ed100fb80 to disappear
Jul  7 08:14:29.909: INFO: Pod client-containers-4a35f73e-1597-406c-8236-605ed100fb80 no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:22.693 seconds]
[k8s.io] Docker Containers
test/e2e/framework/framework.go:592
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":24,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 35 lines ...
• [SLOW TEST:25.941 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":5,"skipped":70,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:14:40.074: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  test/e2e/framework/framework.go:175

... skipping 21 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating secret with name projected-secret-test-2a0dcbfd-9f14-4daa-b3ce-b22d377ffb0c
STEP: Creating a pod to test consume secrets
Jul  7 08:14:10.306: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e116bba9-4687-4793-ab85-1cd7690ae32a" in namespace "projected-6553" to be "Succeeded or Failed"
Jul  7 08:14:10.354: INFO: Pod "pod-projected-secrets-e116bba9-4687-4793-ab85-1cd7690ae32a": Phase="Pending", Reason="", readiness=false. Elapsed: 47.647113ms
Jul  7 08:14:12.524: INFO: Pod "pod-projected-secrets-e116bba9-4687-4793-ab85-1cd7690ae32a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218460818s
Jul  7 08:14:14.665: INFO: Pod "pod-projected-secrets-e116bba9-4687-4793-ab85-1cd7690ae32a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.359394212s
Jul  7 08:14:16.849: INFO: Pod "pod-projected-secrets-e116bba9-4687-4793-ab85-1cd7690ae32a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.54343661s
Jul  7 08:14:18.906: INFO: Pod "pod-projected-secrets-e116bba9-4687-4793-ab85-1cd7690ae32a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.599943788s
Jul  7 08:14:20.945: INFO: Pod "pod-projected-secrets-e116bba9-4687-4793-ab85-1cd7690ae32a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.638701636s
... skipping 4 lines ...
Jul  7 08:14:31.258: INFO: Pod "pod-projected-secrets-e116bba9-4687-4793-ab85-1cd7690ae32a": Phase="Running", Reason="", readiness=true. Elapsed: 20.952024651s
Jul  7 08:14:33.484: INFO: Pod "pod-projected-secrets-e116bba9-4687-4793-ab85-1cd7690ae32a": Phase="Running", Reason="", readiness=true. Elapsed: 23.178462892s
Jul  7 08:14:35.510: INFO: Pod "pod-projected-secrets-e116bba9-4687-4793-ab85-1cd7690ae32a": Phase="Running", Reason="", readiness=true. Elapsed: 25.204385218s
Jul  7 08:14:37.558: INFO: Pod "pod-projected-secrets-e116bba9-4687-4793-ab85-1cd7690ae32a": Phase="Running", Reason="", readiness=true. Elapsed: 27.252312691s
Jul  7 08:14:39.703: INFO: Pod "pod-projected-secrets-e116bba9-4687-4793-ab85-1cd7690ae32a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.397135485s
STEP: Saw pod success
Jul  7 08:14:39.703: INFO: Pod "pod-projected-secrets-e116bba9-4687-4793-ab85-1cd7690ae32a" satisfied condition "Succeeded or Failed"
Jul  7 08:14:39.832: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-e116bba9-4687-4793-ab85-1cd7690ae32a container secret-volume-test: <nil>
STEP: delete the pod
Jul  7 08:14:40.418: INFO: Waiting for pod pod-projected-secrets-e116bba9-4687-4793-ab85-1cd7690ae32a to disappear
Jul  7 08:14:40.471: INFO: Pod pod-projected-secrets-e116bba9-4687-4793-ab85-1cd7690ae32a no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:31.056 seconds]
[sig-storage] Projected secret
test/e2e/common/projected_secret.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":90,"failed":0}
[BeforeEach] [sig-api-machinery] client-go should negotiate
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:14:40.725: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[It] watch and report errors with accept "application/vnd.kubernetes.protobuf"
  test/e2e/apimachinery/protocol.go:46
Jul  7 08:14:40.726: INFO: >>> kubeConfig: /root/.kube/kind-test-config
[AfterEach] [sig-api-machinery] client-go should negotiate
  test/e2e/framework/framework.go:175
Jul  7 08:14:40.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf\"","total":-1,"completed":14,"skipped":90,"failed":0}

SS
------------------------------
[BeforeEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 11 lines ...
  test/e2e/framework/framework.go:175
Jul  7 08:14:41.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-863" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":-1,"completed":15,"skipped":92,"failed":0}

SSS
------------------------------
[BeforeEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:246.890 seconds]
[k8s.io] Probing container
test/e2e/framework/framework.go:592
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":13,"skipped":136,"failed":0}
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:14:16.115: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test env composition
Jul  7 08:14:17.100: INFO: Waiting up to 5m0s for pod "var-expansion-bf34daf0-8569-4e2f-a41c-15b77992906f" in namespace "var-expansion-7155" to be "Succeeded or Failed"
Jul  7 08:14:17.353: INFO: Pod "var-expansion-bf34daf0-8569-4e2f-a41c-15b77992906f": Phase="Pending", Reason="", readiness=false. Elapsed: 253.528104ms
Jul  7 08:14:19.434: INFO: Pod "var-expansion-bf34daf0-8569-4e2f-a41c-15b77992906f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.334355609s
Jul  7 08:14:21.500: INFO: Pod "var-expansion-bf34daf0-8569-4e2f-a41c-15b77992906f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.399694521s
Jul  7 08:14:23.541: INFO: Pod "var-expansion-bf34daf0-8569-4e2f-a41c-15b77992906f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.441446397s
Jul  7 08:14:25.619: INFO: Pod "var-expansion-bf34daf0-8569-4e2f-a41c-15b77992906f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.519318247s
Jul  7 08:14:27.689: INFO: Pod "var-expansion-bf34daf0-8569-4e2f-a41c-15b77992906f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.589596677s
... skipping 2 lines ...
Jul  7 08:14:33.823: INFO: Pod "var-expansion-bf34daf0-8569-4e2f-a41c-15b77992906f": Phase="Running", Reason="", readiness=true. Elapsed: 16.722824831s
Jul  7 08:14:35.860: INFO: Pod "var-expansion-bf34daf0-8569-4e2f-a41c-15b77992906f": Phase="Running", Reason="", readiness=true. Elapsed: 18.760236739s
Jul  7 08:14:37.875: INFO: Pod "var-expansion-bf34daf0-8569-4e2f-a41c-15b77992906f": Phase="Running", Reason="", readiness=true. Elapsed: 20.774753638s
Jul  7 08:14:39.963: INFO: Pod "var-expansion-bf34daf0-8569-4e2f-a41c-15b77992906f": Phase="Running", Reason="", readiness=true. Elapsed: 22.86327971s
Jul  7 08:14:42.065: INFO: Pod "var-expansion-bf34daf0-8569-4e2f-a41c-15b77992906f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.96491502s
STEP: Saw pod success
Jul  7 08:14:42.065: INFO: Pod "var-expansion-bf34daf0-8569-4e2f-a41c-15b77992906f" satisfied condition "Succeeded or Failed"
Jul  7 08:14:42.123: INFO: Trying to get logs from node kind-worker2 pod var-expansion-bf34daf0-8569-4e2f-a41c-15b77992906f container dapi-container: <nil>
STEP: delete the pod
Jul  7 08:14:42.604: INFO: Waiting for pod var-expansion-bf34daf0-8569-4e2f-a41c-15b77992906f to disappear
Jul  7 08:14:42.671: INFO: Pod var-expansion-bf34daf0-8569-4e2f-a41c-15b77992906f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:26.722 seconds]
[k8s.io] Variable Expansion
test/e2e/framework/framework.go:592
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":136,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:14:42.857: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-239c6f25-5c03-47f3-b818-57c7d50b013a
STEP: Creating a pod to test consume configMaps
Jul  7 08:14:21.718: INFO: Waiting up to 5m0s for pod "pod-configmaps-53d3882f-a6e3-4970-a871-dab738f8926a" in namespace "configmap-9297" to be "Succeeded or Failed"
Jul  7 08:14:21.997: INFO: Pod "pod-configmaps-53d3882f-a6e3-4970-a871-dab738f8926a": Phase="Pending", Reason="", readiness=false. Elapsed: 279.688868ms
Jul  7 08:14:24.135: INFO: Pod "pod-configmaps-53d3882f-a6e3-4970-a871-dab738f8926a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.417378592s
Jul  7 08:14:26.185: INFO: Pod "pod-configmaps-53d3882f-a6e3-4970-a871-dab738f8926a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.467298329s
Jul  7 08:14:28.384: INFO: Pod "pod-configmaps-53d3882f-a6e3-4970-a871-dab738f8926a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.66662368s
Jul  7 08:14:30.466: INFO: Pod "pod-configmaps-53d3882f-a6e3-4970-a871-dab738f8926a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.748342171s
Jul  7 08:14:32.495: INFO: Pod "pod-configmaps-53d3882f-a6e3-4970-a871-dab738f8926a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.777234529s
Jul  7 08:14:34.570: INFO: Pod "pod-configmaps-53d3882f-a6e3-4970-a871-dab738f8926a": Phase="Running", Reason="", readiness=true. Elapsed: 12.852646419s
Jul  7 08:14:36.580: INFO: Pod "pod-configmaps-53d3882f-a6e3-4970-a871-dab738f8926a": Phase="Running", Reason="", readiness=true. Elapsed: 14.862084982s
Jul  7 08:14:38.585: INFO: Pod "pod-configmaps-53d3882f-a6e3-4970-a871-dab738f8926a": Phase="Running", Reason="", readiness=true. Elapsed: 16.867670998s
Jul  7 08:14:40.615: INFO: Pod "pod-configmaps-53d3882f-a6e3-4970-a871-dab738f8926a": Phase="Running", Reason="", readiness=true. Elapsed: 18.897292538s
Jul  7 08:14:42.669: INFO: Pod "pod-configmaps-53d3882f-a6e3-4970-a871-dab738f8926a": Phase="Running", Reason="", readiness=true. Elapsed: 20.951782634s
Jul  7 08:14:44.788: INFO: Pod "pod-configmaps-53d3882f-a6e3-4970-a871-dab738f8926a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.070580109s
STEP: Saw pod success
Jul  7 08:14:44.788: INFO: Pod "pod-configmaps-53d3882f-a6e3-4970-a871-dab738f8926a" satisfied condition "Succeeded or Failed"
Jul  7 08:14:44.820: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-53d3882f-a6e3-4970-a871-dab738f8926a container configmap-volume-test: <nil>
STEP: delete the pod
Jul  7 08:14:44.919: INFO: Waiting for pod pod-configmaps-53d3882f-a6e3-4970-a871-dab738f8926a to disappear
Jul  7 08:14:44.943: INFO: Pod pod-configmaps-53d3882f-a6e3-4970-a871-dab738f8926a no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:23.740 seconds]
[sig-storage] ConfigMap
test/e2e/common/configmap_volume.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":68,"failed":0}

SS
------------------------------
[BeforeEach] [sig-storage] CSI mock volume
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 180 lines ...
Jul  7 08:13:16.958: INFO: PersistentVolumeClaim pvc-46nwj found but phase is Pending instead of Bound.
Jul  7 08:13:19.096: INFO: PersistentVolumeClaim pvc-46nwj found but phase is Pending instead of Bound.
Jul  7 08:13:21.105: INFO: PersistentVolumeClaim pvc-46nwj found but phase is Pending instead of Bound.
Jul  7 08:13:23.197: INFO: PersistentVolumeClaim pvc-46nwj found but phase is Pending instead of Bound.
Jul  7 08:13:25.320: INFO: PersistentVolumeClaim pvc-46nwj found but phase is Pending instead of Bound.
Jul  7 08:13:27.655: INFO: PersistentVolumeClaim pvc-46nwj found but phase is Pending instead of Bound.
Jul  7 08:13:29.656: FAIL: Failed waiting for PVC to be bound PersistentVolumeClaims [pvc-46nwj] not all in phase Bound within 5m0s
Unexpected error:
    <*errors.errorString | 0xc0007da330>: {
        s: "PersistentVolumeClaims [pvc-46nwj] not all in phase Bound within 5m0s",
    }
    PersistentVolumeClaims [pvc-46nwj] not all in phase Bound within 5m0s
occurred

... skipping 476 lines ...
test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  test/e2e/storage/csi_mock_volume.go:298
    should not be passed when podInfoOnMount=false [It]
    test/e2e/storage/csi_mock_volume.go:348

    Jul  7 08:13:29.656: Failed waiting for PVC to be bound PersistentVolumeClaims [pvc-46nwj] not all in phase Bound within 5m0s
    Unexpected error:
        <*errors.errorString | 0xc0007da330>: {
            s: "PersistentVolumeClaims [pvc-46nwj] not all in phase Bound within 5m0s",
        }
        PersistentVolumeClaims [pvc-46nwj] not all in phase Bound within 5m0s
    occurred

    test/e2e/storage/csi_mock_volume.go:1010
------------------------------
{"msg":"FAILED [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false","total":-1,"completed":1,"skipped":28,"failed":1,"failures":["[sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false"]}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:14:45.712: INFO: Only supported for providers [gce gke] (not skeleton)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/framework/framework.go:175

... skipping 22 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jul  7 08:14:18.410: INFO: Waiting up to 5m0s for pod "downwardapi-volume-279b0bea-14b2-4d7f-a420-2aa6123e1a61" in namespace "downward-api-10" to be "Succeeded or Failed"
Jul  7 08:14:18.482: INFO: Pod "downwardapi-volume-279b0bea-14b2-4d7f-a420-2aa6123e1a61": Phase="Pending", Reason="", readiness=false. Elapsed: 72.20865ms
Jul  7 08:14:20.534: INFO: Pod "downwardapi-volume-279b0bea-14b2-4d7f-a420-2aa6123e1a61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123819778s
Jul  7 08:14:22.812: INFO: Pod "downwardapi-volume-279b0bea-14b2-4d7f-a420-2aa6123e1a61": Phase="Pending", Reason="", readiness=false. Elapsed: 4.401866849s
Jul  7 08:14:24.969: INFO: Pod "downwardapi-volume-279b0bea-14b2-4d7f-a420-2aa6123e1a61": Phase="Pending", Reason="", readiness=false. Elapsed: 6.558700816s
Jul  7 08:14:27.018: INFO: Pod "downwardapi-volume-279b0bea-14b2-4d7f-a420-2aa6123e1a61": Phase="Pending", Reason="", readiness=false. Elapsed: 8.608434249s
Jul  7 08:14:29.059: INFO: Pod "downwardapi-volume-279b0bea-14b2-4d7f-a420-2aa6123e1a61": Phase="Pending", Reason="", readiness=false. Elapsed: 10.649240978s
... skipping 4 lines ...
Jul  7 08:14:39.379: INFO: Pod "downwardapi-volume-279b0bea-14b2-4d7f-a420-2aa6123e1a61": Phase="Running", Reason="", readiness=true. Elapsed: 20.96937713s
Jul  7 08:14:41.442: INFO: Pod "downwardapi-volume-279b0bea-14b2-4d7f-a420-2aa6123e1a61": Phase="Running", Reason="", readiness=true. Elapsed: 23.031594999s
Jul  7 08:14:43.477: INFO: Pod "downwardapi-volume-279b0bea-14b2-4d7f-a420-2aa6123e1a61": Phase="Running", Reason="", readiness=true. Elapsed: 25.06722043s
Jul  7 08:14:45.551: INFO: Pod "downwardapi-volume-279b0bea-14b2-4d7f-a420-2aa6123e1a61": Phase="Running", Reason="", readiness=true. Elapsed: 27.14137604s
Jul  7 08:14:47.593: INFO: Pod "downwardapi-volume-279b0bea-14b2-4d7f-a420-2aa6123e1a61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.183078647s
STEP: Saw pod success
Jul  7 08:14:47.593: INFO: Pod "downwardapi-volume-279b0bea-14b2-4d7f-a420-2aa6123e1a61" satisfied condition "Succeeded or Failed"
Jul  7 08:14:47.615: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-279b0bea-14b2-4d7f-a420-2aa6123e1a61 container client-container: <nil>
STEP: delete the pod
Jul  7 08:14:47.719: INFO: Waiting for pod downwardapi-volume-279b0bea-14b2-4d7f-a420-2aa6123e1a61 to disappear
Jul  7 08:14:47.794: INFO: Pod downwardapi-volume-279b0bea-14b2-4d7f-a420-2aa6123e1a61 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:30.209 seconds]
[sig-storage] Downward API volume
test/e2e/common/downwardapi_volume.go:37
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":16,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 12 lines ...
  test/e2e/framework/framework.go:175
Jul  7 08:14:48.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4143" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":8,"skipped":24,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:14:48.919: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 224 lines ...
Jul  7 08:13:13.287: INFO: PersistentVolumeClaim csi-hostpathf48sv found but phase is Pending instead of Bound.
Jul  7 08:13:15.420: INFO: PersistentVolumeClaim csi-hostpathf48sv found but phase is Pending instead of Bound.
Jul  7 08:13:17.501: INFO: PersistentVolumeClaim csi-hostpathf48sv found but phase is Pending instead of Bound.
Jul  7 08:13:19.534: INFO: PersistentVolumeClaim csi-hostpathf48sv found and phase=Bound (4m24.901508719s)
STEP: Expanding non-expandable pvc
Jul  7 08:13:19.928: INFO: currentPvcSize {{1048576 0} {<nil>} 1Mi BinarySI}, newSize {{1074790400 0} {<nil>}  BinarySI}
Jul  7 08:13:20.043: INFO: Error updating pvc csi-hostpathf48sv: persistentvolumeclaims "csi-hostpathf48sv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  7 08:13:22.227: INFO: Error updating pvc csi-hostpathf48sv: persistentvolumeclaims "csi-hostpathf48sv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  7 08:13:24.120: INFO: Error updating pvc csi-hostpathf48sv: persistentvolumeclaims "csi-hostpathf48sv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  7 08:13:26.103: INFO: Error updating pvc csi-hostpathf48sv: persistentvolumeclaims "csi-hostpathf48sv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  7 08:13:28.156: INFO: Error updating pvc csi-hostpathf48sv: persistentvolumeclaims "csi-hostpathf48sv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  7 08:13:30.136: INFO: Error updating pvc csi-hostpathf48sv: persistentvolumeclaims "csi-hostpathf48sv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  7 08:13:32.111: INFO: Error updating pvc csi-hostpathf48sv: persistentvolumeclaims "csi-hostpathf48sv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  7 08:13:34.150: INFO: Error updating pvc csi-hostpathf48sv: persistentvolumeclaims "csi-hostpathf48sv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  7 08:13:36.112: INFO: Error updating pvc csi-hostpathf48sv: persistentvolumeclaims "csi-hostpathf48sv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  7 08:13:38.212: INFO: Error updating pvc csi-hostpathf48sv: persistentvolumeclaims "csi-hostpathf48sv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  7 08:13:40.349: INFO: Error updating pvc csi-hostpathf48sv: persistentvolumeclaims "csi-hostpathf48sv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  7 08:13:42.099: INFO: Error updating pvc csi-hostpathf48sv: persistentvolumeclaims "csi-hostpathf48sv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  7 08:13:44.079: INFO: Error updating pvc csi-hostpathf48sv: persistentvolumeclaims "csi-hostpathf48sv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  7 08:13:46.080: INFO: Error updating pvc csi-hostpathf48sv: persistentvolumeclaims "csi-hostpathf48sv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  7 08:13:48.116: INFO: Error updating pvc csi-hostpathf48sv: persistentvolumeclaims "csi-hostpathf48sv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  7 08:13:50.197: INFO: Error updating pvc csi-hostpathf48sv: persistentvolumeclaims "csi-hostpathf48sv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
Jul  7 08:13:50.421: INFO: Error updating pvc csi-hostpathf48sv: persistentvolumeclaims "csi-hostpathf48sv" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
STEP: Deleting pvc
Jul  7 08:13:50.421: INFO: Deleting PersistentVolumeClaim "csi-hostpathf48sv"
Jul  7 08:13:50.541: INFO: Waiting up to 5m0s for PersistentVolume pvc-efdb5161-6a4e-4660-b4ed-3bc85f1bc178 to get deleted
Jul  7 08:13:50.618: INFO: PersistentVolume pvc-efdb5161-6a4e-4660-b4ed-3bc85f1bc178 found and phase=Bound (77.449205ms)
Jul  7 08:13:55.770: INFO: PersistentVolume pvc-efdb5161-6a4e-4660-b4ed-3bc85f1bc178 was removed
STEP: Deleting sc
... skipping 50 lines ...
  test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (block volmode)] volume-expand
    test/e2e/storage/testsuites/base.go:126
      should not allow expansion of pvcs without AllowVolumeExpansion property
      test/e2e/storage/testsuites/volume_expand.go:147
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property","total":-1,"completed":5,"skipped":105,"failed":0}

SSS
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":21,"failed":0}
[BeforeEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:14:42.131: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 19 lines ...
  test/e2e/common/runtime.go:41
    on terminated container
    test/e2e/common/runtime.go:134
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":21,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:15:02.234: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  test/e2e/framework/framework.go:175

... skipping 41 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jul  7 08:14:45.551: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4c6507e6-55e9-46f0-96af-e1bd3bcfd44e" in namespace "projected-1252" to be "Succeeded or Failed"
Jul  7 08:14:45.591: INFO: Pod "downwardapi-volume-4c6507e6-55e9-46f0-96af-e1bd3bcfd44e": Phase="Pending", Reason="", readiness=false. Elapsed: 39.443953ms
Jul  7 08:14:47.625: INFO: Pod "downwardapi-volume-4c6507e6-55e9-46f0-96af-e1bd3bcfd44e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073051997s
Jul  7 08:14:49.678: INFO: Pod "downwardapi-volume-4c6507e6-55e9-46f0-96af-e1bd3bcfd44e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.12617438s
Jul  7 08:14:51.739: INFO: Pod "downwardapi-volume-4c6507e6-55e9-46f0-96af-e1bd3bcfd44e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.187111967s
Jul  7 08:14:53.757: INFO: Pod "downwardapi-volume-4c6507e6-55e9-46f0-96af-e1bd3bcfd44e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.205616076s
Jul  7 08:14:55.818: INFO: Pod "downwardapi-volume-4c6507e6-55e9-46f0-96af-e1bd3bcfd44e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.266831975s
Jul  7 08:14:57.924: INFO: Pod "downwardapi-volume-4c6507e6-55e9-46f0-96af-e1bd3bcfd44e": Phase="Running", Reason="", readiness=true. Elapsed: 12.372544502s
Jul  7 08:15:00.508: INFO: Pod "downwardapi-volume-4c6507e6-55e9-46f0-96af-e1bd3bcfd44e": Phase="Running", Reason="", readiness=true. Elapsed: 14.956872836s
Jul  7 08:15:02.578: INFO: Pod "downwardapi-volume-4c6507e6-55e9-46f0-96af-e1bd3bcfd44e": Phase="Running", Reason="", readiness=true. Elapsed: 17.026223078s
Jul  7 08:15:04.630: INFO: Pod "downwardapi-volume-4c6507e6-55e9-46f0-96af-e1bd3bcfd44e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.07816298s
STEP: Saw pod success
Jul  7 08:15:04.630: INFO: Pod "downwardapi-volume-4c6507e6-55e9-46f0-96af-e1bd3bcfd44e" satisfied condition "Succeeded or Failed"
Jul  7 08:15:04.675: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-4c6507e6-55e9-46f0-96af-e1bd3bcfd44e container client-container: <nil>
STEP: delete the pod
Jul  7 08:15:04.825: INFO: Waiting for pod downwardapi-volume-4c6507e6-55e9-46f0-96af-e1bd3bcfd44e to disappear
Jul  7 08:15:04.830: INFO: Pod downwardapi-volume-4c6507e6-55e9-46f0-96af-e1bd3bcfd44e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:19.686 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:36
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":70,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:127
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
... skipping 21 lines ...
Jul  7 08:14:52.367: INFO: PersistentVolumeClaim pvc-g8t26 found but phase is Pending instead of Bound.
Jul  7 08:14:54.379: INFO: PersistentVolumeClaim pvc-g8t26 found and phase=Bound (14.527430815s)
Jul  7 08:14:54.379: INFO: Waiting up to 3m0s for PersistentVolume local-qvmxw to have phase Bound
Jul  7 08:14:54.434: INFO: PersistentVolume local-qvmxw found and phase=Bound (54.328434ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-jq7r
STEP: Creating a pod to test exec-volume-test
Jul  7 08:14:54.569: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-jq7r" in namespace "volume-3560" to be "Succeeded or Failed"
Jul  7 08:14:54.574: INFO: Pod "exec-volume-test-preprovisionedpv-jq7r": Phase="Pending", Reason="", readiness=false. Elapsed: 5.550824ms
Jul  7 08:14:56.653: INFO: Pod "exec-volume-test-preprovisionedpv-jq7r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084506545s
Jul  7 08:14:58.856: INFO: Pod "exec-volume-test-preprovisionedpv-jq7r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.286603154s
Jul  7 08:15:01.072: INFO: Pod "exec-volume-test-preprovisionedpv-jq7r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.503515094s
Jul  7 08:15:03.094: INFO: Pod "exec-volume-test-preprovisionedpv-jq7r": Phase="Pending", Reason="", readiness=false. Elapsed: 8.524756279s
Jul  7 08:15:05.098: INFO: Pod "exec-volume-test-preprovisionedpv-jq7r": Phase="Pending", Reason="", readiness=false. Elapsed: 10.528811898s
Jul  7 08:15:07.122: INFO: Pod "exec-volume-test-preprovisionedpv-jq7r": Phase="Pending", Reason="", readiness=false. Elapsed: 12.553051142s
Jul  7 08:15:09.139: INFO: Pod "exec-volume-test-preprovisionedpv-jq7r": Phase="Pending", Reason="", readiness=false. Elapsed: 14.569903858s
Jul  7 08:15:11.213: INFO: Pod "exec-volume-test-preprovisionedpv-jq7r": Phase="Running", Reason="", readiness=true. Elapsed: 16.644551273s
Jul  7 08:15:13.335: INFO: Pod "exec-volume-test-preprovisionedpv-jq7r": Phase="Running", Reason="", readiness=true. Elapsed: 18.766376085s
Jul  7 08:15:15.398: INFO: Pod "exec-volume-test-preprovisionedpv-jq7r": Phase="Running", Reason="", readiness=true. Elapsed: 20.829065819s
Jul  7 08:15:17.474: INFO: Pod "exec-volume-test-preprovisionedpv-jq7r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.905276742s
STEP: Saw pod success
Jul  7 08:15:17.474: INFO: Pod "exec-volume-test-preprovisionedpv-jq7r" satisfied condition "Succeeded or Failed"
Jul  7 08:15:17.485: INFO: Trying to get logs from node kind-worker pod exec-volume-test-preprovisionedpv-jq7r container exec-container-preprovisionedpv-jq7r: <nil>
STEP: delete the pod
Jul  7 08:15:17.690: INFO: Waiting for pod exec-volume-test-preprovisionedpv-jq7r to disappear
Jul  7 08:15:17.702: INFO: Pod exec-volume-test-preprovisionedpv-jq7r no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-jq7r
Jul  7 08:15:17.702: INFO: Deleting pod "exec-volume-test-preprovisionedpv-jq7r" in namespace "volume-3560"
... skipping 17 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/testsuites/base.go:126
      should allow exec of files on the volume
      test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":5,"skipped":27,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:15:19.465: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 222 lines ...
test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":83,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] provisioning
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:15:20.482: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 24 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-616825f9-872e-47ed-a7fb-c41765f0b6e5
STEP: Creating a pod to test consume configMaps
Jul  7 08:15:00.826: INFO: Waiting up to 5m0s for pod "pod-configmaps-63dfb8fd-0e0d-4ee7-8c75-2662761e3696" in namespace "configmap-6992" to be "Succeeded or Failed"
Jul  7 08:15:01.115: INFO: Pod "pod-configmaps-63dfb8fd-0e0d-4ee7-8c75-2662761e3696": Phase="Pending", Reason="", readiness=false. Elapsed: 288.961575ms
Jul  7 08:15:03.140: INFO: Pod "pod-configmaps-63dfb8fd-0e0d-4ee7-8c75-2662761e3696": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314446163s
Jul  7 08:15:05.156: INFO: Pod "pod-configmaps-63dfb8fd-0e0d-4ee7-8c75-2662761e3696": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330273075s
Jul  7 08:15:07.197: INFO: Pod "pod-configmaps-63dfb8fd-0e0d-4ee7-8c75-2662761e3696": Phase="Pending", Reason="", readiness=false. Elapsed: 6.370655258s
Jul  7 08:15:09.213: INFO: Pod "pod-configmaps-63dfb8fd-0e0d-4ee7-8c75-2662761e3696": Phase="Pending", Reason="", readiness=false. Elapsed: 8.386651977s
Jul  7 08:15:11.243: INFO: Pod "pod-configmaps-63dfb8fd-0e0d-4ee7-8c75-2662761e3696": Phase="Pending", Reason="", readiness=false. Elapsed: 10.416607084s
Jul  7 08:15:13.325: INFO: Pod "pod-configmaps-63dfb8fd-0e0d-4ee7-8c75-2662761e3696": Phase="Pending", Reason="", readiness=false. Elapsed: 12.498941395s
Jul  7 08:15:15.406: INFO: Pod "pod-configmaps-63dfb8fd-0e0d-4ee7-8c75-2662761e3696": Phase="Pending", Reason="", readiness=false. Elapsed: 14.579877433s
Jul  7 08:15:17.499: INFO: Pod "pod-configmaps-63dfb8fd-0e0d-4ee7-8c75-2662761e3696": Phase="Running", Reason="", readiness=true. Elapsed: 16.673008558s
Jul  7 08:15:19.711: INFO: Pod "pod-configmaps-63dfb8fd-0e0d-4ee7-8c75-2662761e3696": Phase="Running", Reason="", readiness=true. Elapsed: 18.885505771s
Jul  7 08:15:21.770: INFO: Pod "pod-configmaps-63dfb8fd-0e0d-4ee7-8c75-2662761e3696": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.944242978s
STEP: Saw pod success
Jul  7 08:15:21.770: INFO: Pod "pod-configmaps-63dfb8fd-0e0d-4ee7-8c75-2662761e3696" satisfied condition "Succeeded or Failed"
Jul  7 08:15:21.778: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-63dfb8fd-0e0d-4ee7-8c75-2662761e3696 container configmap-volume-test: <nil>
STEP: delete the pod
Jul  7 08:15:22.230: INFO: Waiting for pod pod-configmaps-63dfb8fd-0e0d-4ee7-8c75-2662761e3696 to disappear
Jul  7 08:15:22.350: INFO: Pod pod-configmaps-63dfb8fd-0e0d-4ee7-8c75-2662761e3696 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:23.201 seconds]
[sig-storage] ConfigMap
test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":108,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:127
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
... skipping 77 lines ...
  test/e2e/kubectl/portforward.go:474
    that expects NO client request
    test/e2e/kubectl/portforward.go:484
      should support a client that connects, sends DATA, and disconnects
      test/e2e/kubectl/portforward.go:485
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":9,"skipped":34,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:15:24.307: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/framework/framework.go:175

... skipping 231 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Dynamic PV (default fs)] volumes
    test/e2e/storage/testsuites/base.go:126
      should store data
      test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":4,"skipped":42,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:15:27.043: INFO: Driver gluster doesn't support ext4 -- skipping
... skipping 69 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl diff
  test/e2e/kubectl/kubectl.go:883
    should check if kubectl diff finds a difference for Deployments [Conformance]
    test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":6,"skipped":53,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:15:27.317: INFO: Driver csi-hostpath doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/framework/framework.go:175

... skipping 196 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Inline-volume (default fs)] volumes
    test/e2e/storage/testsuites/base.go:126
      should store data
      test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":15,"skipped":142,"failed":0}
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/common/sysctl.go:35
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:15:33.415: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 8 lines ...
  test/e2e/framework/framework.go:175
Jul  7 08:15:33.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-9884" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls","total":-1,"completed":16,"skipped":142,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
... skipping 2 lines ...
Jul  7 08:14:01.816: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to unmount after the subpath directory is deleted
  test/e2e/storage/testsuites/subpath.go:438
Jul  7 08:14:02.434: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul  7 08:14:03.168: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2390" in namespace "provisioning-2390" to be "Succeeded or Failed"
Jul  7 08:14:03.439: INFO: Pod "hostpath-symlink-prep-provisioning-2390": Phase="Pending", Reason="", readiness=false. Elapsed: 270.531329ms
Jul  7 08:14:05.710: INFO: Pod "hostpath-symlink-prep-provisioning-2390": Phase="Pending", Reason="", readiness=false. Elapsed: 2.542369839s
Jul  7 08:14:08.023: INFO: Pod "hostpath-symlink-prep-provisioning-2390": Phase="Pending", Reason="", readiness=false. Elapsed: 4.855365739s
Jul  7 08:14:10.143: INFO: Pod "hostpath-symlink-prep-provisioning-2390": Phase="Pending", Reason="", readiness=false. Elapsed: 6.974569042s
Jul  7 08:14:12.355: INFO: Pod "hostpath-symlink-prep-provisioning-2390": Phase="Pending", Reason="", readiness=false. Elapsed: 9.18722192s
Jul  7 08:14:14.503: INFO: Pod "hostpath-symlink-prep-provisioning-2390": Phase="Running", Reason="", readiness=true. Elapsed: 11.334377489s
Jul  7 08:14:16.629: INFO: Pod "hostpath-symlink-prep-provisioning-2390": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.461143871s
STEP: Saw pod success
Jul  7 08:14:16.629: INFO: Pod "hostpath-symlink-prep-provisioning-2390" satisfied condition "Succeeded or Failed"
Jul  7 08:14:16.629: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2390" in namespace "provisioning-2390"
Jul  7 08:14:17.059: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2390" to be fully deleted
Jul  7 08:14:17.235: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-tbd8
Jul  7 08:14:37.562: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35195 --kubeconfig=/root/.kube/kind-test-config exec --namespace=provisioning-2390 pod-subpath-test-inlinevolume-tbd8 --container test-container-volume-inlinevolume-tbd8 -- /bin/sh -c rm -r /test-volume/provisioning-2390'
Jul  7 08:14:40.062: INFO: stderr: ""
Jul  7 08:14:40.062: INFO: stdout: ""
STEP: Deleting pod pod-subpath-test-inlinevolume-tbd8
Jul  7 08:14:40.062: INFO: Deleting pod "pod-subpath-test-inlinevolume-tbd8" in namespace "provisioning-2390"
Jul  7 08:14:40.162: INFO: Wait up to 5m0s for pod "pod-subpath-test-inlinevolume-tbd8" to be fully deleted
STEP: Deleting pod
Jul  7 08:15:20.294: INFO: Deleting pod "pod-subpath-test-inlinevolume-tbd8" in namespace "provisioning-2390"
Jul  7 08:15:20.397: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-2390" in namespace "provisioning-2390" to be "Succeeded or Failed"
Jul  7 08:15:20.410: INFO: Pod "hostpath-symlink-prep-provisioning-2390": Phase="Pending", Reason="", readiness=false. Elapsed: 13.086133ms
Jul  7 08:15:22.472: INFO: Pod "hostpath-symlink-prep-provisioning-2390": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074613162s
Jul  7 08:15:24.598: INFO: Pod "hostpath-symlink-prep-provisioning-2390": Phase="Pending", Reason="", readiness=false. Elapsed: 4.201360595s
Jul  7 08:15:26.746: INFO: Pod "hostpath-symlink-prep-provisioning-2390": Phase="Pending", Reason="", readiness=false. Elapsed: 6.349178147s
Jul  7 08:15:28.785: INFO: Pod "hostpath-symlink-prep-provisioning-2390": Phase="Pending", Reason="", readiness=false. Elapsed: 8.388404588s
Jul  7 08:15:30.845: INFO: Pod "hostpath-symlink-prep-provisioning-2390": Phase="Pending", Reason="", readiness=false. Elapsed: 10.448190355s
Jul  7 08:15:32.906: INFO: Pod "hostpath-symlink-prep-provisioning-2390": Phase="Pending", Reason="", readiness=false. Elapsed: 12.508627057s
Jul  7 08:15:34.925: INFO: Pod "hostpath-symlink-prep-provisioning-2390": Phase="Pending", Reason="", readiness=false. Elapsed: 14.527530858s
Jul  7 08:15:36.985: INFO: Pod "hostpath-symlink-prep-provisioning-2390": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.5878978s
STEP: Saw pod success
Jul  7 08:15:36.985: INFO: Pod "hostpath-symlink-prep-provisioning-2390" satisfied condition "Succeeded or Failed"
Jul  7 08:15:36.985: INFO: Deleting pod "hostpath-symlink-prep-provisioning-2390" in namespace "provisioning-2390"
Jul  7 08:15:37.156: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-2390" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:175
Jul  7 08:15:37.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-2390" for this suite.
... skipping 6 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:126
      should be able to unmount after the subpath directory is deleted
      test/e2e/storage/testsuites/subpath.go:438
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":7,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:15:37.390: INFO: Only supported for providers [vsphere] (not skeleton)
[AfterEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/framework/framework.go:175

... skipping 146 lines ...
[It] should support readOnly file specified in the volumeMount [LinuxOnly]
  test/e2e/storage/testsuites/subpath.go:375
Jul  7 08:15:02.814: INFO: In-tree plugin kubernetes.io/empty-dir is not migrated, not validating any metrics
Jul  7 08:15:02.814: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-n8qn
STEP: Creating a pod to test subpath
Jul  7 08:15:02.856: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-n8qn" in namespace "provisioning-8905" to be "Succeeded or Failed"
Jul  7 08:15:02.890: INFO: Pod "pod-subpath-test-inlinevolume-n8qn": Phase="Pending", Reason="", readiness=false. Elapsed: 33.906228ms
Jul  7 08:15:04.968: INFO: Pod "pod-subpath-test-inlinevolume-n8qn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112146349s
Jul  7 08:15:07.020: INFO: Pod "pod-subpath-test-inlinevolume-n8qn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163885708s
Jul  7 08:15:09.084: INFO: Pod "pod-subpath-test-inlinevolume-n8qn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.228308431s
Jul  7 08:15:11.134: INFO: Pod "pod-subpath-test-inlinevolume-n8qn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.278208147s
Jul  7 08:15:13.185: INFO: Pod "pod-subpath-test-inlinevolume-n8qn": Phase="Pending", Reason="", readiness=false. Elapsed: 10.329411778s
... skipping 7 lines ...
Jul  7 08:15:30.141: INFO: Pod "pod-subpath-test-inlinevolume-n8qn": Phase="Pending", Reason="", readiness=false. Elapsed: 27.285295998s
Jul  7 08:15:32.197: INFO: Pod "pod-subpath-test-inlinevolume-n8qn": Phase="Pending", Reason="", readiness=false. Elapsed: 29.341393791s
Jul  7 08:15:34.253: INFO: Pod "pod-subpath-test-inlinevolume-n8qn": Phase="Pending", Reason="", readiness=false. Elapsed: 31.397043418s
Jul  7 08:15:36.578: INFO: Pod "pod-subpath-test-inlinevolume-n8qn": Phase="Running", Reason="", readiness=true. Elapsed: 33.721665103s
Jul  7 08:15:38.622: INFO: Pod "pod-subpath-test-inlinevolume-n8qn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.766352964s
STEP: Saw pod success
Jul  7 08:15:38.623: INFO: Pod "pod-subpath-test-inlinevolume-n8qn" satisfied condition "Succeeded or Failed"
Jul  7 08:15:38.715: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-inlinevolume-n8qn container test-container-subpath-inlinevolume-n8qn: <nil>
STEP: delete the pod
Jul  7 08:15:39.055: INFO: Waiting for pod pod-subpath-test-inlinevolume-n8qn to disappear
Jul  7 08:15:39.142: INFO: Pod pod-subpath-test-inlinevolume-n8qn no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-n8qn
Jul  7 08:15:39.142: INFO: Deleting pod "pod-subpath-test-inlinevolume-n8qn" in namespace "provisioning-8905"
... skipping 12 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:126
      should support readOnly file specified in the volumeMount [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:375
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":6,"skipped":23,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:15:39.305: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volume-expand
  test/e2e/framework/framework.go:175

... skipping 55 lines ...
  test/e2e/framework/framework.go:175
Jul  7 08:15:40.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5401" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":7,"skipped":26,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:15:40.187: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 23 lines ...
Jul  7 08:15:24.359: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul  7 08:15:24.687: INFO: Waiting up to 5m0s for pod "pod-699a9bda-66c3-4b68-a691-70ea1ad3ee3c" in namespace "emptydir-5732" to be "Succeeded or Failed"
Jul  7 08:15:24.839: INFO: Pod "pod-699a9bda-66c3-4b68-a691-70ea1ad3ee3c": Phase="Pending", Reason="", readiness=false. Elapsed: 152.56215ms
Jul  7 08:15:27.111: INFO: Pod "pod-699a9bda-66c3-4b68-a691-70ea1ad3ee3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.424501561s
Jul  7 08:15:29.251: INFO: Pod "pod-699a9bda-66c3-4b68-a691-70ea1ad3ee3c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.564360687s
Jul  7 08:15:31.410: INFO: Pod "pod-699a9bda-66c3-4b68-a691-70ea1ad3ee3c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.723855148s
Jul  7 08:15:33.503: INFO: Pod "pod-699a9bda-66c3-4b68-a691-70ea1ad3ee3c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.816543509s
Jul  7 08:15:35.696: INFO: Pod "pod-699a9bda-66c3-4b68-a691-70ea1ad3ee3c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.009518795s
Jul  7 08:15:37.885: INFO: Pod "pod-699a9bda-66c3-4b68-a691-70ea1ad3ee3c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.198369655s
Jul  7 08:15:40.105: INFO: Pod "pod-699a9bda-66c3-4b68-a691-70ea1ad3ee3c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.418595958s
Jul  7 08:15:42.223: INFO: Pod "pod-699a9bda-66c3-4b68-a691-70ea1ad3ee3c": Phase="Running", Reason="", readiness=true. Elapsed: 17.536740806s
Jul  7 08:15:44.280: INFO: Pod "pod-699a9bda-66c3-4b68-a691-70ea1ad3ee3c": Phase="Running", Reason="", readiness=true. Elapsed: 19.593734171s
Jul  7 08:15:46.423: INFO: Pod "pod-699a9bda-66c3-4b68-a691-70ea1ad3ee3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.736319267s
STEP: Saw pod success
Jul  7 08:15:46.423: INFO: Pod "pod-699a9bda-66c3-4b68-a691-70ea1ad3ee3c" satisfied condition "Succeeded or Failed"
Jul  7 08:15:46.910: INFO: Trying to get logs from node kind-worker2 pod pod-699a9bda-66c3-4b68-a691-70ea1ad3ee3c container test-container: <nil>
STEP: delete the pod
Jul  7 08:15:47.399: INFO: Waiting for pod pod-699a9bda-66c3-4b68-a691-70ea1ad3ee3c to disappear
Jul  7 08:15:47.493: INFO: Pod pod-699a9bda-66c3-4b68-a691-70ea1ad3ee3c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:23.475 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:42
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":49,"failed":0}

SSSSS
------------------------------
[BeforeEach] [k8s.io] Pods
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 15 lines ...
• [SLOW TEST:12.909 seconds]
[k8s.io] Pods
test/e2e/framework/framework.go:592
  should get a host IP [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":38,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:15:51.179: INFO: Only supported for providers [gce gke] (not skeleton)
... skipping 149 lines ...
• [SLOW TEST:23.898 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":69,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:15:51.991: INFO: Driver azure-disk doesn't support ntfs -- skipping
... skipping 139 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: azure-disk]
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/testsuites/base.go:126
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:191

      Only supported for providers [azure] (not skeleton)

      test/e2e/storage/drivers/in_tree.go:1531
------------------------------
... skipping 148 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    test/e2e/storage/testsuites/base.go:126
      should not mount / map unused volumes in a pod
      test/e2e/storage/testsuites/volumemode.go:345
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":2,"skipped":24,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 59 lines ...
• [SLOW TEST:58.348 seconds]
[sig-network] Networking
test/e2e/network/framework.go:23
  should check kube-proxy urls
  test/e2e/network/networking.go:149
------------------------------
{"msg":"PASSED [sig-network] Networking should check kube-proxy urls","total":-1,"completed":8,"skipped":72,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:16:03.263: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/framework/framework.go:175

... skipping 76 lines ...
• [SLOW TEST:36.315 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":5,"skipped":47,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:16:03.397: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 57 lines ...
  test/e2e/framework/framework.go:175
Jul  7 08:16:03.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2412" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":9,"skipped":81,"failed":0}

S
------------------------------
[BeforeEach] [sig-windows] Windows volume mounts 
  test/e2e/windows/framework.go:28
Jul  7 08:16:03.835: INFO: Only supported for node OS distro [windows] (not debian)
... skipping 135 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
    test/e2e/storage/testsuites/base.go:126
      should not mount / map unused volumes in a pod
      test/e2e/storage/testsuites/volumemode.go:345
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod","total":-1,"completed":6,"skipped":72,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:16:09.353: INFO: Driver azure-disk doesn't support ntfs -- skipping
... skipping 347 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Dynamic PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:126
      should be able to unmount after the subpath directory is deleted
      test/e2e/storage/testsuites/subpath.go:438
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":7,"skipped":31,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:16:11.109: INFO: Driver hostPath doesn't support PreprovisionedPV -- skipping
... skipping 100 lines ...
      Only supported for providers [aws] (not skeleton)

      test/e2e/storage/drivers/in_tree.go:1677
------------------------------
SS
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":115,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:127
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:16:10.483: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 39 lines ...
  test/e2e/framework/framework.go:175
Jul  7 08:16:12.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-535" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":-1,"completed":8,"skipped":116,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:16:12.834: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 121 lines ...
Jul  7 08:16:00.647: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on tmpfs
Jul  7 08:16:01.214: INFO: Waiting up to 5m0s for pod "pod-b60dc8c2-3cf4-43a9-8bfe-cd34ca8e0c08" in namespace "emptydir-8157" to be "Succeeded or Failed"
Jul  7 08:16:01.698: INFO: Pod "pod-b60dc8c2-3cf4-43a9-8bfe-cd34ca8e0c08": Phase="Pending", Reason="", readiness=false. Elapsed: 484.163323ms
Jul  7 08:16:03.797: INFO: Pod "pod-b60dc8c2-3cf4-43a9-8bfe-cd34ca8e0c08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.582957354s
Jul  7 08:16:05.868: INFO: Pod "pod-b60dc8c2-3cf4-43a9-8bfe-cd34ca8e0c08": Phase="Pending", Reason="", readiness=false. Elapsed: 4.653417038s
Jul  7 08:16:07.923: INFO: Pod "pod-b60dc8c2-3cf4-43a9-8bfe-cd34ca8e0c08": Phase="Pending", Reason="", readiness=false. Elapsed: 6.708981939s
Jul  7 08:16:10.057: INFO: Pod "pod-b60dc8c2-3cf4-43a9-8bfe-cd34ca8e0c08": Phase="Pending", Reason="", readiness=false. Elapsed: 8.842895969s
Jul  7 08:16:12.106: INFO: Pod "pod-b60dc8c2-3cf4-43a9-8bfe-cd34ca8e0c08": Phase="Pending", Reason="", readiness=false. Elapsed: 10.892104019s
Jul  7 08:16:14.115: INFO: Pod "pod-b60dc8c2-3cf4-43a9-8bfe-cd34ca8e0c08": Phase="Pending", Reason="", readiness=false. Elapsed: 12.900946532s
Jul  7 08:16:16.167: INFO: Pod "pod-b60dc8c2-3cf4-43a9-8bfe-cd34ca8e0c08": Phase="Running", Reason="", readiness=true. Elapsed: 14.952965587s
Jul  7 08:16:18.326: INFO: Pod "pod-b60dc8c2-3cf4-43a9-8bfe-cd34ca8e0c08": Phase="Running", Reason="", readiness=true. Elapsed: 17.111939324s
Jul  7 08:16:20.472: INFO: Pod "pod-b60dc8c2-3cf4-43a9-8bfe-cd34ca8e0c08": Phase="Running", Reason="", readiness=true. Elapsed: 19.257364963s
Jul  7 08:16:22.537: INFO: Pod "pod-b60dc8c2-3cf4-43a9-8bfe-cd34ca8e0c08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.322712442s
STEP: Saw pod success
Jul  7 08:16:22.537: INFO: Pod "pod-b60dc8c2-3cf4-43a9-8bfe-cd34ca8e0c08" satisfied condition "Succeeded or Failed"
Jul  7 08:16:22.575: INFO: Trying to get logs from node kind-worker2 pod pod-b60dc8c2-3cf4-43a9-8bfe-cd34ca8e0c08 container test-container: <nil>
STEP: delete the pod
Jul  7 08:16:22.883: INFO: Waiting for pod pod-b60dc8c2-3cf4-43a9-8bfe-cd34ca8e0c08 to disappear
Jul  7 08:16:22.892: INFO: Pod pod-b60dc8c2-3cf4-43a9-8bfe-cd34ca8e0c08 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
... skipping 112 lines ...
• [SLOW TEST:50.905 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should remove from active list jobs that have been deleted
  test/e2e/apps/cronjob.go:197
------------------------------
{"msg":"PASSED [sig-apps] CronJob should remove from active list jobs that have been deleted","total":-1,"completed":17,"skipped":143,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:16:24.662: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumes
  test/e2e/framework/framework.go:175

... skipping 71 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: gluster]
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/testsuites/base.go:126
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:191

      Driver gluster doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:180
------------------------------
... skipping 50 lines ...
  test/e2e/common/runtime.go:41
    on terminated container
    test/e2e/common/runtime.go:134
      should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
      test/e2e/common/runtime.go:171
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]","total":-1,"completed":8,"skipped":41,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:16:42.459: INFO: Driver gluster doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/framework/framework.go:175

... skipping 99 lines ...
      Only supported for providers [azure] (not skeleton)

      test/e2e/storage/drivers/in_tree.go:1531
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":25,"failed":0}
[BeforeEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:16:22.934: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/empty_dir.go:47
[It] nonexistent volume subPath should have the correct mode and owner using FSGroup
  test/e2e/common/empty_dir.go:60
STEP: Creating a pod to test emptydir subpath on tmpfs
Jul  7 08:16:23.151: INFO: Waiting up to 5m0s for pod "pod-efdad125-aa6f-473e-81d8-2d67f049e4c3" in namespace "emptydir-9494" to be "Succeeded or Failed"
Jul  7 08:16:23.170: INFO: Pod "pod-efdad125-aa6f-473e-81d8-2d67f049e4c3": Phase="Pending", Reason="", readiness=false. Elapsed: 19.113634ms
Jul  7 08:16:25.305: INFO: Pod "pod-efdad125-aa6f-473e-81d8-2d67f049e4c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153943658s
Jul  7 08:16:27.369: INFO: Pod "pod-efdad125-aa6f-473e-81d8-2d67f049e4c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.217944677s
Jul  7 08:16:29.405: INFO: Pod "pod-efdad125-aa6f-473e-81d8-2d67f049e4c3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.253706194s
Jul  7 08:16:31.442: INFO: Pod "pod-efdad125-aa6f-473e-81d8-2d67f049e4c3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.290624561s
Jul  7 08:16:33.477: INFO: Pod "pod-efdad125-aa6f-473e-81d8-2d67f049e4c3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.326115197s
Jul  7 08:16:35.568: INFO: Pod "pod-efdad125-aa6f-473e-81d8-2d67f049e4c3": Phase="Pending", Reason="", readiness=false. Elapsed: 12.416883793s
Jul  7 08:16:37.662: INFO: Pod "pod-efdad125-aa6f-473e-81d8-2d67f049e4c3": Phase="Pending", Reason="", readiness=false. Elapsed: 14.510645316s
Jul  7 08:16:39.916: INFO: Pod "pod-efdad125-aa6f-473e-81d8-2d67f049e4c3": Phase="Running", Reason="", readiness=true. Elapsed: 16.764519635s
Jul  7 08:16:41.938: INFO: Pod "pod-efdad125-aa6f-473e-81d8-2d67f049e4c3": Phase="Running", Reason="", readiness=true. Elapsed: 18.786576927s
Jul  7 08:16:43.977: INFO: Pod "pod-efdad125-aa6f-473e-81d8-2d67f049e4c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.826054668s
STEP: Saw pod success
Jul  7 08:16:43.977: INFO: Pod "pod-efdad125-aa6f-473e-81d8-2d67f049e4c3" satisfied condition "Succeeded or Failed"
Jul  7 08:16:43.994: INFO: Trying to get logs from node kind-worker2 pod pod-efdad125-aa6f-473e-81d8-2d67f049e4c3 container test-container: <nil>
STEP: delete the pod
Jul  7 08:16:44.130: INFO: Waiting for pod pod-efdad125-aa6f-473e-81d8-2d67f049e4c3 to disappear
Jul  7 08:16:44.183: INFO: Pod pod-efdad125-aa6f-473e-81d8-2d67f049e4c3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
... skipping 6 lines ...
test/e2e/common/empty_dir.go:42
  when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup]
  test/e2e/common/empty_dir.go:45
    nonexistent volume subPath should have the correct mode and owner using FSGroup
    test/e2e/common/empty_dir.go:60
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup","total":-1,"completed":4,"skipped":25,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:16:12.984: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul  7 08:16:13.502: INFO: Waiting up to 5m0s for pod "pod-c4e7e920-9cdd-465c-9719-18f581219905" in namespace "emptydir-9070" to be "Succeeded or Failed"
Jul  7 08:16:13.535: INFO: Pod "pod-c4e7e920-9cdd-465c-9719-18f581219905": Phase="Pending", Reason="", readiness=false. Elapsed: 32.943762ms
Jul  7 08:16:15.606: INFO: Pod "pod-c4e7e920-9cdd-465c-9719-18f581219905": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104317461s
Jul  7 08:16:17.726: INFO: Pod "pod-c4e7e920-9cdd-465c-9719-18f581219905": Phase="Pending", Reason="", readiness=false. Elapsed: 4.223835962s
Jul  7 08:16:19.751: INFO: Pod "pod-c4e7e920-9cdd-465c-9719-18f581219905": Phase="Pending", Reason="", readiness=false. Elapsed: 6.249015236s
Jul  7 08:16:21.813: INFO: Pod "pod-c4e7e920-9cdd-465c-9719-18f581219905": Phase="Pending", Reason="", readiness=false. Elapsed: 8.311249921s
Jul  7 08:16:23.834: INFO: Pod "pod-c4e7e920-9cdd-465c-9719-18f581219905": Phase="Pending", Reason="", readiness=false. Elapsed: 10.331990065s
... skipping 5 lines ...
Jul  7 08:16:36.118: INFO: Pod "pod-c4e7e920-9cdd-465c-9719-18f581219905": Phase="Pending", Reason="", readiness=false. Elapsed: 22.615684578s
Jul  7 08:16:38.134: INFO: Pod "pod-c4e7e920-9cdd-465c-9719-18f581219905": Phase="Running", Reason="", readiness=true. Elapsed: 24.632235155s
Jul  7 08:16:40.184: INFO: Pod "pod-c4e7e920-9cdd-465c-9719-18f581219905": Phase="Running", Reason="", readiness=true. Elapsed: 26.682166839s
Jul  7 08:16:42.342: INFO: Pod "pod-c4e7e920-9cdd-465c-9719-18f581219905": Phase="Running", Reason="", readiness=true. Elapsed: 28.839945414s
Jul  7 08:16:44.367: INFO: Pod "pod-c4e7e920-9cdd-465c-9719-18f581219905": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.864723859s
STEP: Saw pod success
Jul  7 08:16:44.367: INFO: Pod "pod-c4e7e920-9cdd-465c-9719-18f581219905" satisfied condition "Succeeded or Failed"
Jul  7 08:16:44.372: INFO: Trying to get logs from node kind-worker2 pod pod-c4e7e920-9cdd-465c-9719-18f581219905 container test-container: <nil>
STEP: delete the pod
Jul  7 08:16:44.478: INFO: Waiting for pod pod-c4e7e920-9cdd-465c-9719-18f581219905 to disappear
Jul  7 08:16:44.549: INFO: Pod pod-c4e7e920-9cdd-465c-9719-18f581219905 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:31.772 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:42
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":135,"failed":0}

SSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:16:44.786: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 149 lines ...
  test/e2e/framework/framework.go:175
Jul  7 08:16:45.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4571" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle","total":-1,"completed":10,"skipped":159,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 70 lines ...
  test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and write from pod1
      test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":11,"skipped":54,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:16:47.244: INFO: Only supported for providers [vsphere] (not skeleton)
[AfterEach] [Testpattern: Dynamic PV (default fs)] provisioning
  test/e2e/framework/framework.go:175

... skipping 57 lines ...
      Driver vsphere doesn't support ntfs -- skipping

      test/e2e/storage/testsuites/base.go:185
------------------------------
SSSSSSSS
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":9,"skipped":49,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:16:10.485: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 57 lines ...
  test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and read from pod1
      test/e2e/storage/persistent_volumes-local.go:228
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1","total":-1,"completed":10,"skipped":49,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:16:59.642: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)] volumes
  test/e2e/framework/framework.go:175

... skipping 66 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Jul  7 08:16:46.393: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f4f9caaa-b87c-490a-bfdf-2b8c6871f3f6" in namespace "projected-8551" to be "Succeeded or Failed"
Jul  7 08:16:46.539: INFO: Pod "downwardapi-volume-f4f9caaa-b87c-490a-bfdf-2b8c6871f3f6": Phase="Pending", Reason="", readiness=false. Elapsed: 145.915826ms
Jul  7 08:16:48.584: INFO: Pod "downwardapi-volume-f4f9caaa-b87c-490a-bfdf-2b8c6871f3f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191161942s
Jul  7 08:16:50.613: INFO: Pod "downwardapi-volume-f4f9caaa-b87c-490a-bfdf-2b8c6871f3f6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21997367s
Jul  7 08:16:52.637: INFO: Pod "downwardapi-volume-f4f9caaa-b87c-490a-bfdf-2b8c6871f3f6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.244623037s
Jul  7 08:16:54.643: INFO: Pod "downwardapi-volume-f4f9caaa-b87c-490a-bfdf-2b8c6871f3f6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.249926043s
Jul  7 08:16:56.717: INFO: Pod "downwardapi-volume-f4f9caaa-b87c-490a-bfdf-2b8c6871f3f6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.324882672s
Jul  7 08:16:58.754: INFO: Pod "downwardapi-volume-f4f9caaa-b87c-490a-bfdf-2b8c6871f3f6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.361045235s
Jul  7 08:17:01.154: INFO: Pod "downwardapi-volume-f4f9caaa-b87c-490a-bfdf-2b8c6871f3f6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.761725641s
Jul  7 08:17:03.258: INFO: Pod "downwardapi-volume-f4f9caaa-b87c-490a-bfdf-2b8c6871f3f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.865463421s
STEP: Saw pod success
Jul  7 08:17:03.258: INFO: Pod "downwardapi-volume-f4f9caaa-b87c-490a-bfdf-2b8c6871f3f6" satisfied condition "Succeeded or Failed"
Jul  7 08:17:03.325: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-f4f9caaa-b87c-490a-bfdf-2b8c6871f3f6 container client-container: <nil>
STEP: delete the pod
Jul  7 08:17:03.545: INFO: Waiting for pod downwardapi-volume-f4f9caaa-b87c-490a-bfdf-2b8c6871f3f6 to disappear
Jul  7 08:17:03.579: INFO: Pod downwardapi-volume-f4f9caaa-b87c-490a-bfdf-2b8c6871f3f6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:17.602 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:36
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":163,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:17:03.595: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  test/e2e/framework/framework.go:175

... skipping 45 lines ...
Jul  7 08:16:47.269: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Jul  7 08:16:47.644: INFO: Waiting up to 5m0s for pod "downward-api-5cec4de8-a350-4836-9f07-513f5f49832f" in namespace "downward-api-7556" to be "Succeeded or Failed"
Jul  7 08:16:47.693: INFO: Pod "downward-api-5cec4de8-a350-4836-9f07-513f5f49832f": Phase="Pending", Reason="", readiness=false. Elapsed: 49.238009ms
Jul  7 08:16:49.712: INFO: Pod "downward-api-5cec4de8-a350-4836-9f07-513f5f49832f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067901894s
Jul  7 08:16:51.725: INFO: Pod "downward-api-5cec4de8-a350-4836-9f07-513f5f49832f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081223586s
Jul  7 08:16:53.749: INFO: Pod "downward-api-5cec4de8-a350-4836-9f07-513f5f49832f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104849282s
Jul  7 08:16:55.787: INFO: Pod "downward-api-5cec4de8-a350-4836-9f07-513f5f49832f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.142811571s
Jul  7 08:16:57.818: INFO: Pod "downward-api-5cec4de8-a350-4836-9f07-513f5f49832f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.174058345s
Jul  7 08:16:59.937: INFO: Pod "downward-api-5cec4de8-a350-4836-9f07-513f5f49832f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.293240603s
Jul  7 08:17:01.980: INFO: Pod "downward-api-5cec4de8-a350-4836-9f07-513f5f49832f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.33593792s
Jul  7 08:17:03.986: INFO: Pod "downward-api-5cec4de8-a350-4836-9f07-513f5f49832f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.34149529s
STEP: Saw pod success
Jul  7 08:17:03.986: INFO: Pod "downward-api-5cec4de8-a350-4836-9f07-513f5f49832f" satisfied condition "Succeeded or Failed"
Jul  7 08:17:04.074: INFO: Trying to get logs from node kind-worker2 pod downward-api-5cec4de8-a350-4836-9f07-513f5f49832f container dapi-container: <nil>
STEP: delete the pod
Jul  7 08:17:04.249: INFO: Waiting for pod downward-api-5cec4de8-a350-4836-9f07-513f5f49832f to disappear
Jul  7 08:17:04.374: INFO: Pod downward-api-5cec4de8-a350-4836-9f07-513f5f49832f no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:17.180 seconds]
[sig-node] Downward API
test/e2e/common/downward_api.go:34
  should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":69,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:17:04.469: INFO: Driver emptydir doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/framework/framework.go:175

... skipping 153 lines ...
STEP: creating an object not containing a namespace with in-cluster config
Jul  7 08:16:43.702: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35195 --kubeconfig=/root/.kube/kind-test-config exec --namespace=kubectl-9577 httpd -- /bin/sh -x -c /tmp/kubectl create -f /tmp/invalid-configmap-without-namespace.yaml --v=6 2>&1'
Jul  7 08:16:50.889: INFO: rc: 255
STEP: trying to use kubectl with invalid token
Jul  7 08:16:50.889: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35195 --kubeconfig=/root/.kube/kind-test-config exec --namespace=kubectl-9577 httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1'
Jul  7 08:16:53.411: INFO: rc: 255
Jul  7 08:16:53.411: INFO: got err error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35195 --kubeconfig=/root/.kube/kind-test-config exec --namespace=kubectl-9577 httpd -- /bin/sh -x -c /tmp/kubectl get pods --token=invalid --v=7 2>&1:
Command stdout:
I0707 08:16:52.951072     199 merged_client_builder.go:163] Using in-cluster namespace
I0707 08:16:52.951350     199 merged_client_builder.go:121] Using in-cluster configuration
I0707 08:16:52.955530     199 merged_client_builder.go:121] Using in-cluster configuration
I0707 08:16:53.082236     199 merged_client_builder.go:121] Using in-cluster configuration
I0707 08:16:53.082686     199 round_trippers.go:420] GET https://10.96.0.1:443/api/v1/namespaces/kubectl-9577/pods?limit=500
... skipping 8 lines ...
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}]
F0707 08:16:53.128590     199 helpers.go:115] error: You must be logged in to the server (Unauthorized)
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc0001a8001, 0xc0007461c0, 0x68, 0x1af)
	vendor/k8s.io/klog/v2/klog.go:996 +0xb8
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x2f123a0, 0xc000000003, 0x0, 0x0, 0xc00036e070, 0x2d07572, 0xa, 0x73, 0x40a200)
	vendor/k8s.io/klog/v2/klog.go:945 +0x19d
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x2f123a0, 0x3, 0x0, 0x0, 0x2, 0xc000949ab0, 0x1, 0x1)
	vendor/k8s.io/klog/v2/klog.go:718 +0x15e
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	vendor/k8s.io/klog/v2/klog.go:1442
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc0000e0f80, 0x3a, 0x1)
	staging/src/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x1e8
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x1e344c0, 0xc0005b2f80, 0x1c64940)
	staging/src/k8s.io/kubectl/pkg/cmd/util/helpers.go:177 +0x8cc
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	staging/src/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc000369b80, 0xc0007051d0, 0x1, 0x3)
... skipping 72 lines ...
	vendor/golang.org/x/net/http2/transport.go:674 +0x64a

stderr:
+ /tmp/kubectl get pods '--token=invalid' '--v=7'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid server
Jul  7 08:16:53.412: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35195 --kubeconfig=/root/.kube/kind-test-config exec --namespace=kubectl-9577 httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1'
Jul  7 08:16:56.290: INFO: rc: 255
Jul  7 08:16:56.290: INFO: got err error running /home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35195 --kubeconfig=/root/.kube/kind-test-config exec --namespace=kubectl-9577 httpd -- /bin/sh -x -c /tmp/kubectl get pods --server=invalid --v=6 2>&1:
Command stdout:
I0707 08:16:55.540422     213 merged_client_builder.go:163] Using in-cluster namespace
I0707 08:16:55.702879     213 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 162 milliseconds
I0707 08:16:55.702962     213 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 10.96.0.10:53: no such host
I0707 08:16:55.749830     213 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 46 milliseconds
I0707 08:16:55.749994     213 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 10.96.0.10:53: no such host
I0707 08:16:55.750032     213 shortcut.go:89] Error loading discovery information: Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 10.96.0.10:53: no such host
I0707 08:16:55.826194     213 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 75 milliseconds
I0707 08:16:55.826285     213 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 10.96.0.10:53: no such host
I0707 08:16:55.983325     213 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 151 milliseconds
I0707 08:16:55.984258     213 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 10.96.0.10:53: no such host
I0707 08:16:56.016926     213 round_trippers.go:443] GET http://invalid/api?timeout=32s  in 31 milliseconds
I0707 08:16:56.016998     213 cached_discovery.go:121] skipped caching discovery info due to Get "http://invalid/api?timeout=32s": dial tcp: lookup invalid on 10.96.0.10:53: no such host
I0707 08:16:56.017041     213 helpers.go:234] Connection error: Get http://invalid/api?timeout=32s: dial tcp: lookup invalid on 10.96.0.10:53: no such host
F0707 08:16:56.017069     213 helpers.go:115] Unable to connect to the server: dial tcp: lookup invalid on 10.96.0.10:53: no such host
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc0001b2001, 0xc0002fa700, 0x87, 0x1b7)
	vendor/k8s.io/klog/v2/klog.go:996 +0xb8
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x2f123a0, 0xc000000003, 0x0, 0x0, 0xc00039ac40, 0x2d07572, 0xa, 0x73, 0x40a200)
	vendor/k8s.io/klog/v2/klog.go:945 +0x19d
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x2f123a0, 0x3, 0x0, 0x0, 0x2, 0xc000855ab0, 0x1, 0x1)
	vendor/k8s.io/klog/v2/klog.go:718 +0x15e
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	vendor/k8s.io/klog/v2/klog.go:1442
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc000a30e40, 0x58, 0x1)
	staging/src/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x1e8
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x1e33860, 0xc00038f560, 0x1c64940)
	staging/src/k8s.io/kubectl/pkg/cmd/util/helpers.go:188 +0x958
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	staging/src/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc00041adc0, 0xc0002f49c0, 0x1, 0x3)
... skipping 24 lines ...
	staging/src/k8s.io/kubectl/pkg/util/logs/logs.go:51 +0x96

stderr:
+ /tmp/kubectl get pods '--server=invalid' '--v=6'
command terminated with exit code 255

error:
exit status 255
STEP: trying to use kubectl with invalid namespace
Jul  7 08:16:56.290: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35195 --kubeconfig=/root/.kube/kind-test-config exec --namespace=kubectl-9577 httpd -- /bin/sh -x -c /tmp/kubectl get pods --namespace=invalid --v=6 2>&1'
Jul  7 08:16:59.046: INFO: stderr: "+ /tmp/kubectl get pods '--namespace=invalid' '--v=6'\n"
Jul  7 08:16:59.046: INFO: stdout: "I0707 08:16:58.399035     225 merged_client_builder.go:121] Using in-cluster configuration\nI0707 08:16:58.504903     225 merged_client_builder.go:121] Using in-cluster configuration\nI0707 08:16:58.675575     225 merged_client_builder.go:121] Using in-cluster configuration\nI0707 08:16:58.754836     225 round_trippers.go:443] GET https://10.96.0.1:443/api/v1/namespaces/invalid/pods?limit=500 200 OK in 78 milliseconds\nNo resources found in invalid namespace.\n"
Jul  7 08:16:59.046: INFO: stdout: I0707 08:16:58.399035     225 merged_client_builder.go:121] Using in-cluster configuration
... skipping 72 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:390
    should handle in-cluster config
    test/e2e/kubectl/kubectl.go:648
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should handle in-cluster config","total":-1,"completed":17,"skipped":86,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:17:07.258: INFO: Driver hostPathSymlink doesn't support ext4 -- skipping
... skipping 52 lines ...
  test/e2e/framework/framework.go:175
Jul  7 08:17:07.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-3036" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler","total":-1,"completed":13,"skipped":74,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:17:07.845: INFO: Driver emptydir doesn't support DynamicPV -- skipping
... skipping 153 lines ...
• [SLOW TEST:90.080 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should replace jobs when ReplaceConcurrent
  test/e2e/apps/cronjob.go:142
------------------------------
{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent","total":-1,"completed":8,"skipped":31,"failed":0}

SSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support cascading deletion of custom resources","total":-1,"completed":2,"skipped":36,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:12:52.687: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename pv
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 36 lines ...
Jul  7 08:15:54.821: INFO: PersistentVolumeClaim pvc-hfhfv found and phase=Bound (4.235996477s)
Jul  7 08:15:54.821: INFO: Waiting up to 3m0s for PersistentVolume nfs-snn9z to have phase Bound
Jul  7 08:15:55.026: INFO: PersistentVolume nfs-snn9z found and phase=Bound (205.69903ms)
STEP: Checking pod has write access to PersistentVolume
Jul  7 08:15:55.042: INFO: Creating nfs test pod
Jul  7 08:15:55.070: INFO: Pod should terminate with exitcode 0 (success)
Jul  7 08:15:55.070: INFO: Waiting up to 5m0s for pod "pvc-tester-zvbpk" in namespace "pv-6117" to be "Succeeded or Failed"
Jul  7 08:15:55.138: INFO: Pod "pvc-tester-zvbpk": Phase="Pending", Reason="", readiness=false. Elapsed: 67.765867ms
Jul  7 08:15:57.175: INFO: Pod "pvc-tester-zvbpk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105482773s
Jul  7 08:15:59.197: INFO: Pod "pvc-tester-zvbpk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.127520187s
Jul  7 08:16:01.698: INFO: Pod "pvc-tester-zvbpk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.628010219s
Jul  7 08:16:03.845: INFO: Pod "pvc-tester-zvbpk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.774612682s
Jul  7 08:16:05.928: INFO: Pod "pvc-tester-zvbpk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.85776174s
... skipping 10 lines ...
Jul  7 08:16:28.823: INFO: Pod "pvc-tester-zvbpk": Phase="Running", Reason="", readiness=true. Elapsed: 33.752982791s
Jul  7 08:16:30.910: INFO: Pod "pvc-tester-zvbpk": Phase="Running", Reason="", readiness=true. Elapsed: 35.840354045s
Jul  7 08:16:33.021: INFO: Pod "pvc-tester-zvbpk": Phase="Running", Reason="", readiness=true. Elapsed: 37.950803079s
Jul  7 08:16:35.062: INFO: Pod "pvc-tester-zvbpk": Phase="Running", Reason="", readiness=true. Elapsed: 39.992428064s
Jul  7 08:16:37.081: INFO: Pod "pvc-tester-zvbpk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 42.010699167s
STEP: Saw pod success
Jul  7 08:16:37.081: INFO: Pod "pvc-tester-zvbpk" satisfied condition "Succeeded or Failed"
Jul  7 08:16:37.081: INFO: Pod pvc-tester-zvbpk succeeded 
Jul  7 08:16:37.081: INFO: Deleting pod "pvc-tester-zvbpk" in namespace "pv-6117"
Jul  7 08:16:37.118: INFO: Wait up to 5m0s for pod "pvc-tester-zvbpk" to be fully deleted
STEP: Deleting the PVC to invoke the reclaim policy.
Jul  7 08:16:37.137: INFO: Deleting PVC pvc-hfhfv to trigger reclamation of PV nfs-snn9z
Jul  7 08:16:37.137: INFO: Deleting PersistentVolumeClaim "pvc-hfhfv"
... skipping 24 lines ...
  test/e2e/storage/persistent_volumes.go:122
    with Single PV - PVC pairs
    test/e2e/storage/persistent_volumes.go:155
      create a PV and a pre-bound PVC: test write access
      test/e2e/storage/persistent_volumes.go:196
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access","total":-1,"completed":3,"skipped":36,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:17:12.393: INFO: Driver gluster doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  test/e2e/framework/framework.go:175

... skipping 83 lines ...
Jul  7 08:15:52.383: INFO: PersistentVolumeClaim pvc-cqm5f found but phase is Pending instead of Bound.
Jul  7 08:15:54.390: INFO: PersistentVolumeClaim pvc-cqm5f found and phase=Bound (6.093806526s)
Jul  7 08:15:54.390: INFO: Waiting up to 3m0s for PersistentVolume nfs-4qlbg to have phase Bound
Jul  7 08:15:54.395: INFO: PersistentVolume nfs-4qlbg found and phase=Bound (4.912392ms)
STEP: Creating pod exec-volume-test-preprovisionedpv-kdh9
STEP: Creating a pod to test exec-volume-test
Jul  7 08:15:54.454: INFO: Waiting up to 5m0s for pod "exec-volume-test-preprovisionedpv-kdh9" in namespace "volume-6290" to be "Succeeded or Failed"
Jul  7 08:15:54.674: INFO: Pod "exec-volume-test-preprovisionedpv-kdh9": Phase="Pending", Reason="", readiness=false. Elapsed: 220.39339ms
Jul  7 08:15:56.694: INFO: Pod "exec-volume-test-preprovisionedpv-kdh9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.240402003s
Jul  7 08:15:59.033: INFO: Pod "exec-volume-test-preprovisionedpv-kdh9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.579403068s
Jul  7 08:16:01.178: INFO: Pod "exec-volume-test-preprovisionedpv-kdh9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.724283009s
Jul  7 08:16:03.242: INFO: Pod "exec-volume-test-preprovisionedpv-kdh9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.78861999s
Jul  7 08:16:05.301: INFO: Pod "exec-volume-test-preprovisionedpv-kdh9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.847395602s
Jul  7 08:16:07.386: INFO: Pod "exec-volume-test-preprovisionedpv-kdh9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.932708833s
Jul  7 08:16:09.423: INFO: Pod "exec-volume-test-preprovisionedpv-kdh9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.969325606s
Jul  7 08:16:11.630: INFO: Pod "exec-volume-test-preprovisionedpv-kdh9": Phase="Running", Reason="", readiness=true. Elapsed: 17.176048691s
Jul  7 08:16:13.789: INFO: Pod "exec-volume-test-preprovisionedpv-kdh9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.33537193s
STEP: Saw pod success
Jul  7 08:16:13.789: INFO: Pod "exec-volume-test-preprovisionedpv-kdh9" satisfied condition "Succeeded or Failed"
Jul  7 08:16:13.884: INFO: Trying to get logs from node kind-worker2 pod exec-volume-test-preprovisionedpv-kdh9 container exec-container-preprovisionedpv-kdh9: <nil>
STEP: delete the pod
Jul  7 08:16:14.114: INFO: Waiting for pod exec-volume-test-preprovisionedpv-kdh9 to disappear
Jul  7 08:16:14.147: INFO: Pod exec-volume-test-preprovisionedpv-kdh9 no longer exists
STEP: Deleting pod exec-volume-test-preprovisionedpv-kdh9
Jul  7 08:16:14.147: INFO: Deleting pod "exec-volume-test-preprovisionedpv-kdh9" in namespace "volume-6290"
... skipping 18 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/testsuites/base.go:126
      should allow exec of files on the volume
      test/e2e/storage/testsuites/volumes.go:192
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume","total":-1,"completed":4,"skipped":11,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:17:13.634: INFO: Only supported for providers [vsphere] (not skeleton)
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/framework/framework.go:175

... skipping 74 lines ...
• [SLOW TEST:82.709 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":9,"skipped":49,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 18 lines ...
  test/e2e/framework/framework.go:175
Jul  7 08:17:14.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5432" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":5,"skipped":13,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:17:14.715: INFO: Only supported for providers [gce gke] (not skeleton)
[AfterEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/framework/framework.go:175

... skipping 57 lines ...
      Driver hostPath doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:180
------------------------------
S
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":96,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:16:20.307: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul  7 08:16:41.085: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  test/e2e/framework/framework.go:597
Jul  7 08:16:41.103: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Jul  7 08:17:11.963: INFO: error waiting for conversion to succeed during setup: conversion webhook for stable.example.com/v2, Kind=E2e-test-crd-webhook-8228-crd failed: Post "https://e2e-test-crd-conversion-webhook.crd-webhook-8771.svc:9443/crdconvert?timeout=30s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:175
Jul  7 08:17:14.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-8771" for this suite.
... skipping 4 lines ...
• [SLOW TEST:56.021 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":11,"skipped":96,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:17:16.361: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 138 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-projected-all-test-volume-3b96fdd9-6452-401f-9a67-266986f5f79e
STEP: Creating secret with name secret-projected-all-test-volume-c9ac01d6-806d-4dbb-b325-68d56bf2ec5d
STEP: Creating a pod to test Check all projections for projected volume plugin
Jul  7 08:17:08.080: INFO: Waiting up to 5m0s for pod "projected-volume-bc31560d-188a-4b8c-9597-cc14a8592168" in namespace "projected-586" to be "Succeeded or Failed"
Jul  7 08:17:08.275: INFO: Pod "projected-volume-bc31560d-188a-4b8c-9597-cc14a8592168": Phase="Pending", Reason="", readiness=false. Elapsed: 195.460835ms
Jul  7 08:17:10.398: INFO: Pod "projected-volume-bc31560d-188a-4b8c-9597-cc14a8592168": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318595121s
Jul  7 08:17:12.439: INFO: Pod "projected-volume-bc31560d-188a-4b8c-9597-cc14a8592168": Phase="Pending", Reason="", readiness=false. Elapsed: 4.35880707s
Jul  7 08:17:14.478: INFO: Pod "projected-volume-bc31560d-188a-4b8c-9597-cc14a8592168": Phase="Pending", Reason="", readiness=false. Elapsed: 6.397992968s
Jul  7 08:17:16.558: INFO: Pod "projected-volume-bc31560d-188a-4b8c-9597-cc14a8592168": Phase="Pending", Reason="", readiness=false. Elapsed: 8.477690313s
Jul  7 08:17:18.583: INFO: Pod "projected-volume-bc31560d-188a-4b8c-9597-cc14a8592168": Phase="Pending", Reason="", readiness=false. Elapsed: 10.502905778s
Jul  7 08:17:20.697: INFO: Pod "projected-volume-bc31560d-188a-4b8c-9597-cc14a8592168": Phase="Running", Reason="", readiness=true. Elapsed: 12.617042735s
Jul  7 08:17:22.826: INFO: Pod "projected-volume-bc31560d-188a-4b8c-9597-cc14a8592168": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.746264396s
STEP: Saw pod success
Jul  7 08:17:22.826: INFO: Pod "projected-volume-bc31560d-188a-4b8c-9597-cc14a8592168" satisfied condition "Succeeded or Failed"
Jul  7 08:17:22.935: INFO: Trying to get logs from node kind-worker2 pod projected-volume-bc31560d-188a-4b8c-9597-cc14a8592168 container projected-all-volume-test: <nil>
STEP: delete the pod
Jul  7 08:17:23.240: INFO: Waiting for pod projected-volume-bc31560d-188a-4b8c-9597-cc14a8592168 to disappear
Jul  7 08:17:23.288: INFO: Pod projected-volume-bc31560d-188a-4b8c-9597-cc14a8592168 no longer exists
[AfterEach] [sig-storage] Projected combined
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:16.059 seconds]
[sig-storage] Projected combined
test/e2e/common/projected_combined.go:32
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":95,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:17:23.351: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:175

... skipping 222 lines ...
Jul  7 08:17:03.500: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706329, loc:(*time.Location)(0x7db1f60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-69665d47f8\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 08:17:05.520: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706329, loc:(*time.Location)(0x7db1f60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-69665d47f8\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 08:17:07.716: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706329, loc:(*time.Location)(0x7db1f60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-69665d47f8\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 08:17:09.569: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706329, loc:(*time.Location)(0x7db1f60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-69665d47f8\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 08:17:11.506: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706329, loc:(*time.Location)(0x7db1f60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-69665d47f8\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 08:17:11.532: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706329, loc:(*time.Location)(0x7db1f60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-69665d47f8\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  7 08:17:11.532: FAIL: deploying extension apiserver in namespace aggregator-654
Unexpected error:
    <*errors.errorString | 0xc000fbf930>: {
        s: "error waiting for deployment \"sample-apiserver-deployment\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706329, loc:(*time.Location)(0x7db1f60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, Reason:\"ReplicaSetUpdated\", Message:\"ReplicaSet \\\"sample-apiserver-deployment-69665d47f8\\\" is progressing.\"}}, CollisionCount:(*int32)(nil)}",
    }
    error waiting for deployment "sample-apiserver-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706329, loc:(*time.Location)(0x7db1f60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-69665d47f8\" is progressing."}}, CollisionCount:(*int32)(nil)}
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.TestSampleAPIServer(0xc000a042c0, 0xc00161d700, 0xc000e00940, 0x37)
	test/e2e/apimachinery/aggregator.go:339 +0x2cf0
k8s.io/kubernetes/test/e2e/apimachinery.glob..func1.3()
... skipping 221 lines ...
[sig-api-machinery] Aggregator
test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] [It]
  test/e2e/framework/framework.go:597

  Jul  7 08:17:11.532: deploying extension apiserver in namespace aggregator-654
  Unexpected error:
      <*errors.errorString | 0xc000fbf930>: {
          s: "error waiting for deployment \"sample-apiserver-deployment\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706329, loc:(*time.Location)(0x7db1f60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, Reason:\"ReplicaSetUpdated\", Message:\"ReplicaSet \\\"sample-apiserver-deployment-69665d47f8\\\" is progressing.\"}}, CollisionCount:(*int32)(nil)}",
      }
      error waiting for deployment "sample-apiserver-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706329, loc:(*time.Location)(0x7db1f60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729706328, loc:(*time.Location)(0x7db1f60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-69665d47f8\" is progressing."}}, CollisionCount:(*int32)(nil)}
  occurred

  test/e2e/apimachinery/aggregator.go:339
------------------------------
{"msg":"FAILED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":1,"skipped":30,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (delayed binding)] topology
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:17:29.130: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: blockfs]
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/testsuites/base.go:126
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:180
------------------------------
... skipping 6 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-edaa99c3-1d1f-417d-a8f3-f832d94161a0
STEP: Creating a pod to test consume configMaps
Jul  7 08:17:14.245: INFO: Waiting up to 5m0s for pod "pod-configmaps-623f81ea-75e2-4363-b6ef-347c753ea989" in namespace "configmap-5014" to be "Succeeded or Failed"
Jul  7 08:17:14.334: INFO: Pod "pod-configmaps-623f81ea-75e2-4363-b6ef-347c753ea989": Phase="Pending", Reason="", readiness=false. Elapsed: 88.794919ms
Jul  7 08:17:16.429: INFO: Pod "pod-configmaps-623f81ea-75e2-4363-b6ef-347c753ea989": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183833969s
Jul  7 08:17:18.491: INFO: Pod "pod-configmaps-623f81ea-75e2-4363-b6ef-347c753ea989": Phase="Pending", Reason="", readiness=false. Elapsed: 4.246132207s
Jul  7 08:17:20.522: INFO: Pod "pod-configmaps-623f81ea-75e2-4363-b6ef-347c753ea989": Phase="Pending", Reason="", readiness=false. Elapsed: 6.276695913s
Jul  7 08:17:22.679: INFO: Pod "pod-configmaps-623f81ea-75e2-4363-b6ef-347c753ea989": Phase="Pending", Reason="", readiness=false. Elapsed: 8.434000059s
Jul  7 08:17:24.713: INFO: Pod "pod-configmaps-623f81ea-75e2-4363-b6ef-347c753ea989": Phase="Pending", Reason="", readiness=false. Elapsed: 10.467582773s
Jul  7 08:17:26.811: INFO: Pod "pod-configmaps-623f81ea-75e2-4363-b6ef-347c753ea989": Phase="Running", Reason="", readiness=true. Elapsed: 12.566276857s
Jul  7 08:17:28.902: INFO: Pod "pod-configmaps-623f81ea-75e2-4363-b6ef-347c753ea989": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.657157372s
STEP: Saw pod success
Jul  7 08:17:28.902: INFO: Pod "pod-configmaps-623f81ea-75e2-4363-b6ef-347c753ea989" satisfied condition "Succeeded or Failed"
Jul  7 08:17:28.951: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-623f81ea-75e2-4363-b6ef-347c753ea989 container configmap-volume-test: <nil>
STEP: delete the pod
Jul  7 08:17:29.073: INFO: Waiting for pod pod-configmaps-623f81ea-75e2-4363-b6ef-347c753ea989 to disappear
Jul  7 08:17:29.094: INFO: Pod pod-configmaps-623f81ea-75e2-4363-b6ef-347c753ea989 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:15.380 seconds]
[sig-storage] ConfigMap
test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":52,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 66 lines ...
• [SLOW TEST:46.495 seconds]
[sig-storage] PVC Protection
test/e2e/storage/utils/framework.go:23
  Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable
  test/e2e/storage/pvc_protection.go:143
------------------------------
{"msg":"PASSED [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable","total":-1,"completed":5,"skipped":26,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:17:14.791: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  test/e2e/common/projected_secret.go:90
STEP: Creating projection with secret that has name projected-secret-test-c3a1dafc-61ee-4ffe-967a-6b632074d98a
STEP: Creating a pod to test consume secrets
Jul  7 08:17:16.477: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3d58a96d-1eff-4134-bee7-9091f31968a8" in namespace "projected-1881" to be "Succeeded or Failed"
Jul  7 08:17:16.539: INFO: Pod "pod-projected-secrets-3d58a96d-1eff-4134-bee7-9091f31968a8": Phase="Pending", Reason="", readiness=false. Elapsed: 61.992055ms
Jul  7 08:17:18.566: INFO: Pod "pod-projected-secrets-3d58a96d-1eff-4134-bee7-9091f31968a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089180523s
Jul  7 08:17:20.676: INFO: Pod "pod-projected-secrets-3d58a96d-1eff-4134-bee7-9091f31968a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199168881s
Jul  7 08:17:22.684: INFO: Pod "pod-projected-secrets-3d58a96d-1eff-4134-bee7-9091f31968a8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.206445387s
Jul  7 08:17:24.713: INFO: Pod "pod-projected-secrets-3d58a96d-1eff-4134-bee7-9091f31968a8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.235902371s
Jul  7 08:17:26.813: INFO: Pod "pod-projected-secrets-3d58a96d-1eff-4134-bee7-9091f31968a8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.335714567s
Jul  7 08:17:28.840: INFO: Pod "pod-projected-secrets-3d58a96d-1eff-4134-bee7-9091f31968a8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.362416801s
Jul  7 08:17:30.868: INFO: Pod "pod-projected-secrets-3d58a96d-1eff-4134-bee7-9091f31968a8": Phase="Running", Reason="", readiness=true. Elapsed: 14.390799631s
Jul  7 08:17:33.012: INFO: Pod "pod-projected-secrets-3d58a96d-1eff-4134-bee7-9091f31968a8": Phase="Running", Reason="", readiness=true. Elapsed: 16.534333544s
Jul  7 08:17:35.084: INFO: Pod "pod-projected-secrets-3d58a96d-1eff-4134-bee7-9091f31968a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.606432499s
STEP: Saw pod success
Jul  7 08:17:35.084: INFO: Pod "pod-projected-secrets-3d58a96d-1eff-4134-bee7-9091f31968a8" satisfied condition "Succeeded or Failed"
Jul  7 08:17:35.140: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-3d58a96d-1eff-4134-bee7-9091f31968a8 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jul  7 08:17:35.846: INFO: Waiting for pod pod-projected-secrets-3d58a96d-1eff-4134-bee7-9091f31968a8 to disappear
Jul  7 08:17:35.949: INFO: Pod pod-projected-secrets-3d58a96d-1eff-4134-bee7-9091f31968a8 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:175
... skipping 5 lines ...
• [SLOW TEST:21.438 seconds]
[sig-storage] Projected secret
test/e2e/common/projected_secret.go:35
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
  test/e2e/common/projected_secret.go:90
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController evictions: enough pods, absolute =\u003e should allow an eviction","total":-1,"completed":7,"skipped":56,"failed":0}
[BeforeEach] [Testpattern: inline ephemeral CSI volume] ephemeral
  test/e2e/storage/testsuites/base.go:127
[BeforeEach] [Testpattern: inline ephemeral CSI volume] ephemeral
  test/e2e/storage/testsuites/ephemeral.go:81
[BeforeEach] [Testpattern: inline ephemeral CSI volume] ephemeral
  test/e2e/framework/framework.go:174
... skipping 108 lines ...
  test/e2e/storage/csi_volumes.go:39
    [Testpattern: inline ephemeral CSI volume] ephemeral
    test/e2e/storage/testsuites/base.go:126
      should support two pods which share the same volume
      test/e2e/storage/testsuites/ephemeral.go:141
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should support two pods which share the same volume","total":-1,"completed":8,"skipped":56,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:17:37.797: INFO: Only supported for providers [openstack] (not skeleton)
... skipping 52 lines ...
  test/e2e/framework/framework.go:175
Jul  7 08:17:39.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-9506" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":9,"skipped":63,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:17:39.786: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes
  test/e2e/framework/framework.go:175

... skipping 123 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl server-side dry-run
  test/e2e/kubectl/kubectl.go:914
    should check if kubectl can dry-run update Pods [Conformance]
    test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":11,"skipped":54,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:17:41.082: INFO: Driver hostPath doesn't support DynamicPV -- skipping
... skipping 435 lines ...
  test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (default fs)] volumes
    test/e2e/storage/testsuites/base.go:126
      should store data
      test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data","total":-1,"completed":6,"skipped":80,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:17:46.183: INFO: Driver nfs doesn't support ntfs -- skipping
... skipping 91 lines ...
Jul  7 08:16:47.144: INFO: PersistentVolumeClaim nfsz47m6 found but phase is Pending instead of Bound.
Jul  7 08:16:49.184: INFO: PersistentVolumeClaim nfsz47m6 found but phase is Pending instead of Bound.
Jul  7 08:16:51.495: INFO: PersistentVolumeClaim nfsz47m6 found but phase is Pending instead of Bound.
Jul  7 08:16:53.524: INFO: PersistentVolumeClaim nfsz47m6 found and phase=Bound (16.817303681s)
STEP: Creating pod pod-subpath-test-dynamicpv-dzls
STEP: Creating a pod to test subpath
Jul  7 08:16:53.577: INFO: Waiting up to 5m0s for pod "pod-subpath-test-dynamicpv-dzls" in namespace "provisioning-5520" to be "Succeeded or Failed"
Jul  7 08:16:53.612: INFO: Pod "pod-subpath-test-dynamicpv-dzls": Phase="Pending", Reason="", readiness=false. Elapsed: 35.085885ms
Jul  7 08:16:55.682: INFO: Pod "pod-subpath-test-dynamicpv-dzls": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104799168s
Jul  7 08:16:57.697: INFO: Pod "pod-subpath-test-dynamicpv-dzls": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119836185s
Jul  7 08:16:59.809: INFO: Pod "pod-subpath-test-dynamicpv-dzls": Phase="Pending", Reason="", readiness=false. Elapsed: 6.232124679s
Jul  7 08:17:01.826: INFO: Pod "pod-subpath-test-dynamicpv-dzls": Phase="Pending", Reason="", readiness=false. Elapsed: 8.249389693s
Jul  7 08:17:03.877: INFO: Pod "pod-subpath-test-dynamicpv-dzls": Phase="Pending", Reason="", readiness=false. Elapsed: 10.299755541s
... skipping 2 lines ...
Jul  7 08:17:10.543: INFO: Pod "pod-subpath-test-dynamicpv-dzls": Phase="Pending", Reason="", readiness=false. Elapsed: 16.96629721s
Jul  7 08:17:12.629: INFO: Pod "pod-subpath-test-dynamicpv-dzls": Phase="Pending", Reason="", readiness=false. Elapsed: 19.052011082s
Jul  7 08:17:14.667: INFO: Pod "pod-subpath-test-dynamicpv-dzls": Phase="Pending", Reason="", readiness=false. Elapsed: 21.090254309s
Jul  7 08:17:17.020: INFO: Pod "pod-subpath-test-dynamicpv-dzls": Phase="Running", Reason="", readiness=false. Elapsed: 23.443234227s
Jul  7 08:17:19.224: INFO: Pod "pod-subpath-test-dynamicpv-dzls": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.647291779s
STEP: Saw pod success
Jul  7 08:17:19.224: INFO: Pod "pod-subpath-test-dynamicpv-dzls" satisfied condition "Succeeded or Failed"
Jul  7 08:17:19.229: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-dynamicpv-dzls container test-container-volume-dynamicpv-dzls: <nil>
STEP: delete the pod
Jul  7 08:17:19.708: INFO: Waiting for pod pod-subpath-test-dynamicpv-dzls to disappear
Jul  7 08:17:19.818: INFO: Pod pod-subpath-test-dynamicpv-dzls no longer exists
STEP: Deleting pod pod-subpath-test-dynamicpv-dzls
Jul  7 08:17:19.818: INFO: Deleting pod "pod-subpath-test-dynamicpv-dzls" in namespace "provisioning-5520"
... skipping 20 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Dynamic PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:126
      should support non-existent path
      test/e2e/storage/testsuites/subpath.go:190
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path","total":-1,"completed":7,"skipped":91,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 25 lines ...
• [SLOW TEST:21.019 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
  should observe PodDisruptionBudget status updated
  test/e2e/apps/disruption.go:97
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated","total":-1,"completed":2,"skipped":36,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

SS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 264 lines ...
test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/framework/framework.go:592
    should provide basic identity
    test/e2e/apps/statefulset.go:124
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity","total":-1,"completed":1,"skipped":1,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:17:51.528: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 60 lines ...
      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:180
------------------------------
SS
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":43,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:17:29.649: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 71 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl patch
  test/e2e/kubectl/kubectl.go:1485
    should add annotations for pods in rc  [Conformance]
    test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":5,"skipped":43,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 24 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
Jul  7 08:17:42.020: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-81fd077d-0818-4118-8c09-4feda9d9fea7" in namespace "security-context-test-6042" to be "Succeeded or Failed"
Jul  7 08:17:42.097: INFO: Pod "busybox-readonly-false-81fd077d-0818-4118-8c09-4feda9d9fea7": Phase="Pending", Reason="", readiness=false. Elapsed: 77.349409ms
Jul  7 08:17:44.111: INFO: Pod "busybox-readonly-false-81fd077d-0818-4118-8c09-4feda9d9fea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091317308s
Jul  7 08:17:46.212: INFO: Pod "busybox-readonly-false-81fd077d-0818-4118-8c09-4feda9d9fea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.192253506s
Jul  7 08:17:48.368: INFO: Pod "busybox-readonly-false-81fd077d-0818-4118-8c09-4feda9d9fea7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.347728337s
Jul  7 08:17:50.476: INFO: Pod "busybox-readonly-false-81fd077d-0818-4118-8c09-4feda9d9fea7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.456446438s
Jul  7 08:17:52.524: INFO: Pod "busybox-readonly-false-81fd077d-0818-4118-8c09-4feda9d9fea7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.504372909s
... skipping 5 lines ...
Jul  7 08:18:05.053: INFO: Pod "busybox-readonly-false-81fd077d-0818-4118-8c09-4feda9d9fea7": Phase="Pending", Reason="", readiness=false. Elapsed: 23.03260336s
Jul  7 08:18:07.086: INFO: Pod "busybox-readonly-false-81fd077d-0818-4118-8c09-4feda9d9fea7": Phase="Pending", Reason="", readiness=false. Elapsed: 25.066152718s
Jul  7 08:18:09.098: INFO: Pod "busybox-readonly-false-81fd077d-0818-4118-8c09-4feda9d9fea7": Phase="Pending", Reason="", readiness=false. Elapsed: 27.077963578s
Jul  7 08:18:11.143: INFO: Pod "busybox-readonly-false-81fd077d-0818-4118-8c09-4feda9d9fea7": Phase="Pending", Reason="", readiness=false. Elapsed: 29.12278216s
Jul  7 08:18:13.269: INFO: Pod "busybox-readonly-false-81fd077d-0818-4118-8c09-4feda9d9fea7": Phase="Running", Reason="", readiness=true. Elapsed: 31.248551194s
Jul  7 08:18:15.319: INFO: Pod "busybox-readonly-false-81fd077d-0818-4118-8c09-4feda9d9fea7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.29858841s
Jul  7 08:18:15.319: INFO: Pod "busybox-readonly-false-81fd077d-0818-4118-8c09-4feda9d9fea7" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Jul  7 08:18:15.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6042" for this suite.


... skipping 2 lines ...
test/e2e/framework/framework.go:592
  When creating a pod with readOnlyRootFilesystem
  test/e2e/common/security_context.go:166
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":80,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:18:15.661: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  test/e2e/framework/framework.go:175

... skipping 46 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: emptydir]
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/testsuites/base.go:126
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:191

      Driver emptydir doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:180
------------------------------
... skipping 138 lines ...
  test/e2e/storage/persistent_volumes-local.go:188
    One pod requesting one prebound PVC
    test/e2e/storage/persistent_volumes-local.go:205
      should be able to mount volume and write from pod1
      test/e2e/storage/persistent_volumes-local.go:234
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":12,"skipped":122,"failed":0}

SSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:18:16.997: INFO: Driver local doesn't support ext4 -- skipping
... skipping 69 lines ...
Jul  7 08:17:07.954: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support readOnly directory specified in the volumeMount
  test/e2e/storage/testsuites/subpath.go:360
Jul  7 08:17:08.455: INFO: In-tree plugin kubernetes.io/host-path is not migrated, not validating any metrics
Jul  7 08:17:08.840: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3770" in namespace "provisioning-3770" to be "Succeeded or Failed"
Jul  7 08:17:08.992: INFO: Pod "hostpath-symlink-prep-provisioning-3770": Phase="Pending", Reason="", readiness=false. Elapsed: 152.1159ms
Jul  7 08:17:11.117: INFO: Pod "hostpath-symlink-prep-provisioning-3770": Phase="Pending", Reason="", readiness=false. Elapsed: 2.277561163s
Jul  7 08:17:13.218: INFO: Pod "hostpath-symlink-prep-provisioning-3770": Phase="Pending", Reason="", readiness=false. Elapsed: 4.377941429s
Jul  7 08:17:15.670: INFO: Pod "hostpath-symlink-prep-provisioning-3770": Phase="Pending", Reason="", readiness=false. Elapsed: 6.830385265s
Jul  7 08:17:17.701: INFO: Pod "hostpath-symlink-prep-provisioning-3770": Phase="Pending", Reason="", readiness=false. Elapsed: 8.861645928s
Jul  7 08:17:19.785: INFO: Pod "hostpath-symlink-prep-provisioning-3770": Phase="Pending", Reason="", readiness=false. Elapsed: 10.945831248s
Jul  7 08:17:21.954: INFO: Pod "hostpath-symlink-prep-provisioning-3770": Phase="Running", Reason="", readiness=true. Elapsed: 13.113979584s
Jul  7 08:17:24.091: INFO: Pod "hostpath-symlink-prep-provisioning-3770": Phase="Running", Reason="", readiness=true. Elapsed: 15.251359626s
Jul  7 08:17:26.114: INFO: Pod "hostpath-symlink-prep-provisioning-3770": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.273956647s
STEP: Saw pod success
Jul  7 08:17:26.114: INFO: Pod "hostpath-symlink-prep-provisioning-3770" satisfied condition "Succeeded or Failed"
Jul  7 08:17:26.114: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3770" in namespace "provisioning-3770"
Jul  7 08:17:26.208: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3770" to be fully deleted
Jul  7 08:17:26.218: INFO: Creating resource for inline volume
STEP: Creating pod pod-subpath-test-inlinevolume-nsgb
STEP: Creating a pod to test subpath
Jul  7 08:17:26.263: INFO: Waiting up to 5m0s for pod "pod-subpath-test-inlinevolume-nsgb" in namespace "provisioning-3770" to be "Succeeded or Failed"
Jul  7 08:17:26.286: INFO: Pod "pod-subpath-test-inlinevolume-nsgb": Phase="Pending", Reason="", readiness=false. Elapsed: 22.929344ms
Jul  7 08:17:28.324: INFO: Pod "pod-subpath-test-inlinevolume-nsgb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060946493s
Jul  7 08:17:30.526: INFO: Pod "pod-subpath-test-inlinevolume-nsgb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.262873909s
Jul  7 08:17:32.742: INFO: Pod "pod-subpath-test-inlinevolume-nsgb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.478172075s
Jul  7 08:17:35.029: INFO: Pod "pod-subpath-test-inlinevolume-nsgb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.765207889s
Jul  7 08:17:37.074: INFO: Pod "pod-subpath-test-inlinevolume-nsgb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.810797029s
... skipping 7 lines ...
Jul  7 08:17:54.082: INFO: Pod "pod-subpath-test-inlinevolume-nsgb": Phase="Pending", Reason="", readiness=false. Elapsed: 27.81862758s
Jul  7 08:17:56.168: INFO: Pod "pod-subpath-test-inlinevolume-nsgb": Phase="Pending", Reason="", readiness=false. Elapsed: 29.905048937s
Jul  7 08:17:58.218: INFO: Pod "pod-subpath-test-inlinevolume-nsgb": Phase="Pending", Reason="", readiness=false. Elapsed: 31.954692783s
Jul  7 08:18:00.301: INFO: Pod "pod-subpath-test-inlinevolume-nsgb": Phase="Running", Reason="", readiness=true. Elapsed: 34.037238972s
Jul  7 08:18:02.359: INFO: Pod "pod-subpath-test-inlinevolume-nsgb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.095222929s
STEP: Saw pod success
Jul  7 08:18:02.359: INFO: Pod "pod-subpath-test-inlinevolume-nsgb" satisfied condition "Succeeded or Failed"
Jul  7 08:18:02.396: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-inlinevolume-nsgb container test-container-subpath-inlinevolume-nsgb: <nil>
STEP: delete the pod
Jul  7 08:18:02.647: INFO: Waiting for pod pod-subpath-test-inlinevolume-nsgb to disappear
Jul  7 08:18:02.653: INFO: Pod pod-subpath-test-inlinevolume-nsgb no longer exists
STEP: Deleting pod pod-subpath-test-inlinevolume-nsgb
Jul  7 08:18:02.653: INFO: Deleting pod "pod-subpath-test-inlinevolume-nsgb" in namespace "provisioning-3770"
STEP: Deleting pod
Jul  7 08:18:02.688: INFO: Deleting pod "pod-subpath-test-inlinevolume-nsgb" in namespace "provisioning-3770"
Jul  7 08:18:02.797: INFO: Waiting up to 5m0s for pod "hostpath-symlink-prep-provisioning-3770" in namespace "provisioning-3770" to be "Succeeded or Failed"
Jul  7 08:18:02.809: INFO: Pod "hostpath-symlink-prep-provisioning-3770": Phase="Pending", Reason="", readiness=false. Elapsed: 11.648023ms
Jul  7 08:18:04.827: INFO: Pod "hostpath-symlink-prep-provisioning-3770": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029705329s
Jul  7 08:18:06.857: INFO: Pod "hostpath-symlink-prep-provisioning-3770": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059395097s
Jul  7 08:18:08.896: INFO: Pod "hostpath-symlink-prep-provisioning-3770": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098682837s
Jul  7 08:18:10.914: INFO: Pod "hostpath-symlink-prep-provisioning-3770": Phase="Pending", Reason="", readiness=false. Elapsed: 8.116025568s
Jul  7 08:18:13.180: INFO: Pod "hostpath-symlink-prep-provisioning-3770": Phase="Pending", Reason="", readiness=false. Elapsed: 10.382241853s
Jul  7 08:18:15.236: INFO: Pod "hostpath-symlink-prep-provisioning-3770": Phase="Pending", Reason="", readiness=false. Elapsed: 12.438084217s
Jul  7 08:18:17.407: INFO: Pod "hostpath-symlink-prep-provisioning-3770": Phase="Pending", Reason="", readiness=false. Elapsed: 14.609735346s
Jul  7 08:18:19.442: INFO: Pod "hostpath-symlink-prep-provisioning-3770": Phase="Running", Reason="", readiness=true. Elapsed: 16.644960292s
Jul  7 08:18:21.504: INFO: Pod "hostpath-symlink-prep-provisioning-3770": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.706410024s
STEP: Saw pod success
Jul  7 08:18:21.504: INFO: Pod "hostpath-symlink-prep-provisioning-3770" satisfied condition "Succeeded or Failed"
Jul  7 08:18:21.504: INFO: Deleting pod "hostpath-symlink-prep-provisioning-3770" in namespace "provisioning-3770"
Jul  7 08:18:21.537: INFO: Wait up to 5m0s for pod "hostpath-symlink-prep-provisioning-3770" to be fully deleted
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:175
Jul  7 08:18:21.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-3770" for this suite.
... skipping 6 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:126
      should support readOnly directory specified in the volumeMount
      test/e2e/storage/testsuites/subpath.go:360
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount","total":-1,"completed":14,"skipped":102,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:18:21.716: INFO: Driver hostPathSymlink doesn't support DynamicPV -- skipping
... skipping 37 lines ...
• [SLOW TEST:7.693 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":10,"skipped":78,"failed":0}
[BeforeEach] [sig-storage] PersistentVolumes-local 
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:18:12.182: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename persistent-local-volumes-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 220 lines ...
  test/e2e/storage/csi_volumes.go:39
    [Testpattern: inline ephemeral CSI volume] ephemeral
    test/e2e/storage/testsuites/base.go:126
      should support multiple inline ephemeral volumes
      test/e2e/storage/testsuites/ephemeral.go:178
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should support multiple inline ephemeral volumes","total":-1,"completed":5,"skipped":27,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:18:28.695: INFO: Only supported for providers [aws] (not skeleton)
... skipping 51 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: local][LocalVolumeType: dir-bindmounted]
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/testsuites/base.go:126
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:191

      Driver local doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:180
------------------------------
... skipping 20 lines ...
test/e2e/framework/framework.go:592
  When creating a container with runAsNonRoot
  test/e2e/common/security_context.go:99
    should not run without a specified user ID
    test/e2e/common/security_context.go:154
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should not run without a specified user ID","total":-1,"completed":15,"skipped":105,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 18 lines ...
  test/e2e/common/runtime.go:41
    when running a container with a new image
    test/e2e/common/runtime.go:266
      should not be able to pull image from invalid registry [NodeConformance]
      test/e2e/common/runtime.go:377
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]","total":-1,"completed":19,"skipped":105,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:18:30.167: INFO: Only supported for providers [gce gke] (not skeleton)
[AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes
  test/e2e/framework/framework.go:175

... skipping 54 lines ...
STEP: Destroying namespace "services-5836" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:735

•
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods","total":-1,"completed":16,"skipped":95,"failed":0}
[BeforeEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:16:24.502: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should allow privilege escalation when true [LinuxOnly] [NodeConformance]
  test/e2e/common/security_context.go:362
Jul  7 08:16:24.694: INFO: Waiting up to 5m0s for pod "alpine-nnp-true-362ee115-343b-4994-bb5c-6e85eed8b13a" in namespace "security-context-test-9171" to be "Succeeded or Failed"
Jul  7 08:16:24.703: INFO: Pod "alpine-nnp-true-362ee115-343b-4994-bb5c-6e85eed8b13a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.005089ms
Jul  7 08:16:26.758: INFO: Pod "alpine-nnp-true-362ee115-343b-4994-bb5c-6e85eed8b13a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063897878s
Jul  7 08:16:28.769: INFO: Pod "alpine-nnp-true-362ee115-343b-4994-bb5c-6e85eed8b13a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075684278s
Jul  7 08:16:30.819: INFO: Pod "alpine-nnp-true-362ee115-343b-4994-bb5c-6e85eed8b13a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125657813s
Jul  7 08:16:32.857: INFO: Pod "alpine-nnp-true-362ee115-343b-4994-bb5c-6e85eed8b13a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.163395595s
Jul  7 08:16:34.874: INFO: Pod "alpine-nnp-true-362ee115-343b-4994-bb5c-6e85eed8b13a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.180185561s
... skipping 52 lines ...
Jul  7 08:18:25.232: INFO: Pod "alpine-nnp-true-362ee115-343b-4994-bb5c-6e85eed8b13a": Phase="Pending", Reason="", readiness=false. Elapsed: 2m0.538596243s
Jul  7 08:18:27.274: INFO: Pod "alpine-nnp-true-362ee115-343b-4994-bb5c-6e85eed8b13a": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.580198042s
Jul  7 08:18:29.380: INFO: Pod "alpine-nnp-true-362ee115-343b-4994-bb5c-6e85eed8b13a": Phase="Pending", Reason="", readiness=false. Elapsed: 2m4.685843562s
Jul  7 08:18:31.418: INFO: Pod "alpine-nnp-true-362ee115-343b-4994-bb5c-6e85eed8b13a": Phase="Pending", Reason="", readiness=false. Elapsed: 2m6.723981719s
Jul  7 08:18:33.542: INFO: Pod "alpine-nnp-true-362ee115-343b-4994-bb5c-6e85eed8b13a": Phase="Running", Reason="", readiness=true. Elapsed: 2m8.84859239s
Jul  7 08:18:35.568: INFO: Pod "alpine-nnp-true-362ee115-343b-4994-bb5c-6e85eed8b13a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2m10.873767742s
Jul  7 08:18:35.568: INFO: Pod "alpine-nnp-true-362ee115-343b-4994-bb5c-6e85eed8b13a" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Jul  7 08:18:35.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9171" for this suite.


... skipping 2 lines ...
test/e2e/framework/framework.go:592
  when creating containers with AllowPrivilegeEscalation
  test/e2e/common/security_context.go:291
    should allow privilege escalation when true [LinuxOnly] [NodeConformance]
    test/e2e/common/security_context.go:362
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]","total":-1,"completed":17,"skipped":95,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:18:35.889: INFO: Only supported for providers [gce gke] (not skeleton)
... skipping 157 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Pre-provisioned PV (default fs)] volumes
    test/e2e/storage/testsuites/base.go:126
      should store data
      test/e2e/storage/testsuites/volumes.go:151
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data","total":-1,"completed":18,"skipped":152,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:18:36.121: INFO: Only supported for providers [gce gke] (not skeleton)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/framework/framework.go:175

... skipping 56 lines ...
• [SLOW TEST:103.222 seconds]
[sig-storage] Projected configMap
test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":54,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 19 lines ...
STEP: Destroying namespace "services-3752" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:735

•
------------------------------
{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint","total":-1,"completed":12,"skipped":55,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (ext4)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:18:43.936: INFO: Only supported for providers [azure] (not skeleton)
... skipping 203 lines ...
  test/e2e/storage/csi_volumes.go:39
    [Testpattern: inline ephemeral CSI volume] ephemeral
    test/e2e/storage/testsuites/base.go:126
      should create read-only inline ephemeral volume
      test/e2e/storage/testsuites/ephemeral.go:117
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should create read-only inline ephemeral volume","total":-1,"completed":6,"skipped":57,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-storage] PersistentVolumes-local  [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1","total":-1,"completed":3,"skipped":38,"failed":1,"failures":["[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:18:44.779: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 160 lines ...
Jul  7 08:16:55.957: INFO: PersistentVolumeClaim pvc-7wg2x found but phase is Pending instead of Bound.
Jul  7 08:16:57.975: INFO: PersistentVolumeClaim pvc-7wg2x found but phase is Pending instead of Bound.
Jul  7 08:17:00.164: INFO: PersistentVolumeClaim pvc-7wg2x found but phase is Pending instead of Bound.
Jul  7 08:17:02.188: INFO: PersistentVolumeClaim pvc-7wg2x found but phase is Pending instead of Bound.
Jul  7 08:17:04.231: INFO: PersistentVolumeClaim pvc-7wg2x found and phase=Bound (1m9.842037418s)
STEP: checking for CSIInlineVolumes feature
Jul  7 08:17:35.130: INFO: Error getting logs for pod csi-inline-volume-lvbmj: the server rejected our request for an unknown reason (get pods csi-inline-volume-lvbmj)
Jul  7 08:17:35.130: INFO: Deleting pod "csi-inline-volume-lvbmj" in namespace "csi-mock-volumes-5556"
Jul  7 08:17:35.233: INFO: Wait up to 5m0s for pod "csi-inline-volume-lvbmj" to be fully deleted
STEP: Deleting the previously created pod
Jul  7 08:17:39.398: INFO: Deleting pod "pvc-volume-tester-7mzq5" in namespace "csi-mock-volumes-5556"
Jul  7 08:17:39.639: INFO: Wait up to 5m0s for pod "pvc-volume-tester-7mzq5" to be fully deleted
STEP: Checking CSI driver logs
Jul  7 08:17:50.219: INFO: Found volume attribute csi.storage.k8s.io/serviceAccount.name: default
Jul  7 08:17:50.219: INFO: Found volume attribute csi.storage.k8s.io/pod.name: pvc-volume-tester-7mzq5
Jul  7 08:17:50.219: INFO: Found volume attribute csi.storage.k8s.io/pod.namespace: csi-mock-volumes-5556
Jul  7 08:17:50.219: INFO: Found volume attribute csi.storage.k8s.io/pod.uid: e4fe70a2-3048-48ee-ac43-c2e615e2ae69
Jul  7 08:17:50.219: INFO: Found volume attribute csi.storage.k8s.io/ephemeral: false
Jul  7 08:17:50.219: INFO: Found NodeUnpublishVolume: {json: {"Method":"/csi.v1.Node/NodeUnpublishVolume","Request":{"volume_id":"4","target_path":"/var/lib/kubelet/pods/e4fe70a2-3048-48ee-ac43-c2e615e2ae69/volumes/kubernetes.io~csi/pvc-8ce8424a-a73c-4464-ab02-e987d2a067c0/mount"},"Response":{},"Error":"","FullError":null} Method:NodeUnpublishVolume Request:{VolumeContext:map[]} FullError:{Code:OK Message:} Error:}
STEP: Deleting pod pvc-volume-tester-7mzq5
Jul  7 08:17:50.219: INFO: Deleting pod "pvc-volume-tester-7mzq5" in namespace "csi-mock-volumes-5556"
STEP: Deleting claim pvc-7wg2x
Jul  7 08:17:50.476: INFO: Waiting up to 2m0s for PersistentVolume pvc-8ce8424a-a73c-4464-ab02-e987d2a067c0 to get deleted
Jul  7 08:17:50.531: INFO: PersistentVolume pvc-8ce8424a-a73c-4464-ab02-e987d2a067c0 found and phase=Bound (54.562858ms)
Jul  7 08:17:52.541: INFO: PersistentVolume pvc-8ce8424a-a73c-4464-ab02-e987d2a067c0 found and phase=Released (2.064176654s)
... skipping 45 lines ...
test/e2e/storage/utils/framework.go:23
  CSI workload information using mock driver
  test/e2e/storage/csi_mock_volume.go:298
    should be passed when podInfoOnMount=true
    test/e2e/storage/csi_mock_volume.go:348
------------------------------
{"msg":"PASSED [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true","total":-1,"completed":8,"skipped":97,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:18:49.350: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
... skipping 156 lines ...
• [SLOW TEST:34.117 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":13,"skipped":135,"failed":0}

SS
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":20,"skipped":107,"failed":0}
[BeforeEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:18:31.572: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-8a34d3ab-f01f-4cca-9376-fd90023008cd
STEP: Creating a pod to test consume configMaps
Jul  7 08:18:32.006: INFO: Waiting up to 5m0s for pod "pod-configmaps-433c11a6-4a71-43ac-a471-acdc3494275d" in namespace "configmap-4880" to be "Succeeded or Failed"
Jul  7 08:18:32.089: INFO: Pod "pod-configmaps-433c11a6-4a71-43ac-a471-acdc3494275d": Phase="Pending", Reason="", readiness=false. Elapsed: 82.877793ms
Jul  7 08:18:34.261: INFO: Pod "pod-configmaps-433c11a6-4a71-43ac-a471-acdc3494275d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.255031217s
Jul  7 08:18:36.522: INFO: Pod "pod-configmaps-433c11a6-4a71-43ac-a471-acdc3494275d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.515564962s
Jul  7 08:18:38.578: INFO: Pod "pod-configmaps-433c11a6-4a71-43ac-a471-acdc3494275d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.57151377s
Jul  7 08:18:40.656: INFO: Pod "pod-configmaps-433c11a6-4a71-43ac-a471-acdc3494275d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.650452679s
Jul  7 08:18:42.719: INFO: Pod "pod-configmaps-433c11a6-4a71-43ac-a471-acdc3494275d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.712604351s
... skipping 2 lines ...
Jul  7 08:18:48.987: INFO: Pod "pod-configmaps-433c11a6-4a71-43ac-a471-acdc3494275d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.981049793s
Jul  7 08:18:51.033: INFO: Pod "pod-configmaps-433c11a6-4a71-43ac-a471-acdc3494275d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.027142262s
Jul  7 08:18:53.154: INFO: Pod "pod-configmaps-433c11a6-4a71-43ac-a471-acdc3494275d": Phase="Pending", Reason="", readiness=false. Elapsed: 21.147485504s
Jul  7 08:18:55.355: INFO: Pod "pod-configmaps-433c11a6-4a71-43ac-a471-acdc3494275d": Phase="Pending", Reason="", readiness=false. Elapsed: 23.348953254s
Jul  7 08:18:57.398: INFO: Pod "pod-configmaps-433c11a6-4a71-43ac-a471-acdc3494275d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.391917078s
STEP: Saw pod success
Jul  7 08:18:57.398: INFO: Pod "pod-configmaps-433c11a6-4a71-43ac-a471-acdc3494275d" satisfied condition "Succeeded or Failed"
Jul  7 08:18:57.548: INFO: Trying to get logs from node kind-worker pod pod-configmaps-433c11a6-4a71-43ac-a471-acdc3494275d container configmap-volume-test: <nil>
STEP: delete the pod
Jul  7 08:18:57.826: INFO: Waiting for pod pod-configmaps-433c11a6-4a71-43ac-a471-acdc3494275d to disappear
Jul  7 08:18:57.893: INFO: Pod pod-configmaps-433c11a6-4a71-43ac-a471-acdc3494275d no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:175
... skipping 388 lines ...
test/e2e/network/framework.go:23
  version v1
  test/e2e/network/proxy.go:59
    should proxy through a service and a pod  [Conformance]
    test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":-1,"completed":2,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:19:00.039: INFO: Only supported for providers [azure] (not skeleton)
... skipping 47 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run with an image specified user ID
  test/e2e/common/security_context.go:146
Jul  7 08:18:27.453: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-2406" to be "Succeeded or Failed"
Jul  7 08:18:27.477: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 24.062563ms
Jul  7 08:18:29.513: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059710661s
Jul  7 08:18:31.625: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172276128s
Jul  7 08:18:33.640: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.187321071s
Jul  7 08:18:35.676: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.223137217s
Jul  7 08:18:37.762: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 10.309235482s
... skipping 6 lines ...
Jul  7 08:18:52.130: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 24.6770594s
Jul  7 08:18:54.143: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 26.689402974s
Jul  7 08:18:56.227: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 28.774059396s
Jul  7 08:18:58.293: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 30.839633771s
Jul  7 08:19:00.385: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 32.931675953s
Jul  7 08:19:02.407: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.953726529s
Jul  7 08:19:02.407: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:175
Jul  7 08:19:02.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2406" for this suite.


... skipping 2 lines ...
test/e2e/framework/framework.go:592
  When creating a container with runAsNonRoot
  test/e2e/common/security_context.go:99
    should run with an image specified user ID
    test/e2e/common/security_context.go:146
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":11,"skipped":82,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:19:02.513: INFO: Only supported for providers [vsphere] (not skeleton)
[AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode
  test/e2e/framework/framework.go:175

... skipping 148 lines ...
  test/e2e/storage/csi_volumes.go:39
    [Testpattern: Dynamic PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:126
      should be able to unmount after the subpath directory is deleted
      test/e2e/storage/testsuites/subpath.go:438
------------------------------
{"msg":"PASSED [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted","total":-1,"completed":9,"skipped":50,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Inline-volume (default fs)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:19:03.364: INFO: Driver local doesn't support InlineVolume -- skipping
... skipping 28 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: hostPathSymlink]
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Dynamic PV (immediate binding)] topology
    test/e2e/storage/testsuites/base.go:126
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:191

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      test/e2e/storage/testsuites/base.go:180
------------------------------
... skipping 38 lines ...
      Distro debian doesn't support ntfs -- skipping

      test/e2e/storage/testsuites/base.go:191
------------------------------
S
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":107,"failed":0}
[BeforeEach] [sig-instrumentation] MetricsGrabber
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jul  7 08:18:58.033: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename metrics-grabber
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 37 lines ...
Jul  7 08:18:16.760: INFO: PersistentVolumeClaim pvc-7749r found but phase is Pending instead of Bound.
Jul  7 08:18:18.765: INFO: PersistentVolumeClaim pvc-7749r found and phase=Bound (2.119435872s)
Jul  7 08:18:18.765: INFO: Waiting up to 3m0s for PersistentVolume local-xrwtm to have phase Bound
Jul  7 08:18:18.825: INFO: PersistentVolume local-xrwtm found and phase=Bound (59.775873ms)
STEP: Creating pod pod-subpath-test-preprovisionedpv-9kjl
STEP: Creating a pod to test atomic-volume-subpath
Jul  7 08:18:18.918: INFO: Waiting up to 5m0s for pod "pod-subpath-test-preprovisionedpv-9kjl" in namespace "provisioning-5761" to be "Succeeded or Failed"
Jul  7 08:18:18.959: INFO: Pod "pod-subpath-test-preprovisionedpv-9kjl": Phase="Pending", Reason="", readiness=false. Elapsed: 40.614678ms
Jul  7 08:18:21.053: INFO: Pod "pod-subpath-test-preprovisionedpv-9kjl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134747828s
Jul  7 08:18:23.137: INFO: Pod "pod-subpath-test-preprovisionedpv-9kjl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.219053238s
Jul  7 08:18:25.232: INFO: Pod "pod-subpath-test-preprovisionedpv-9kjl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.31386976s
Jul  7 08:18:27.265: INFO: Pod "pod-subpath-test-preprovisionedpv-9kjl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.34642281s
Jul  7 08:18:29.380: INFO: Pod "pod-subpath-test-preprovisionedpv-9kjl": Phase="Pending", Reason="", readiness=false. Elapsed: 10.461589589s
... skipping 11 lines ...
Jul  7 08:18:54.115: INFO: Pod "pod-subpath-test-preprovisionedpv-9kjl": Phase="Running", Reason="", readiness=true. Elapsed: 35.196479883s
Jul  7 08:18:56.225: INFO: Pod "pod-subpath-test-preprovisionedpv-9kjl": Phase="Running", Reason="", readiness=true. Elapsed: 37.307167058s
Jul  7 08:18:58.297: INFO: Pod "pod-subpath-test-preprovisionedpv-9kjl": Phase="Running", Reason="", readiness=true. Elapsed: 39.378709854s
Jul  7 08:19:00.365: INFO: Pod "pod-subpath-test-preprovisionedpv-9kjl": Phase="Running", Reason="", readiness=true. Elapsed: 41.446681464s
Jul  7 08:19:02.388: INFO: Pod "pod-subpath-test-preprovisionedpv-9kjl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 43.47028406s
STEP: Saw pod success
Jul  7 08:19:02.389: INFO: Pod "pod-subpath-test-preprovisionedpv-9kjl" satisfied condition "Succeeded or Failed"
Jul  7 08:19:02.407: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-preprovisionedpv-9kjl container test-container-subpath-preprovisionedpv-9kjl: <nil>
STEP: delete the pod
Jul  7 08:19:02.508: INFO: Waiting for pod pod-subpath-test-preprovisionedpv-9kjl to disappear
Jul  7 08:19:02.559: INFO: Pod pod-subpath-test-preprovisionedpv-9kjl no longer exists
STEP: Deleting pod pod-subpath-test-preprovisionedpv-9kjl
Jul  7 08:19:02.560: INFO: Deleting pod "pod-subpath-test-preprovisionedpv-9kjl" in namespace "provisioning-5761"
... skipping 22 lines ...
  test/e2e/storage/in_tree_volumes.go:56
    [Testpattern: Pre-provisioned PV (default fs)] subPath
    test/e2e/storage/testsuites/base.go:126
      should support file as subpath [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:226
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]","total":-1,"completed":6,"skipped":47,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:19:05.773: INFO: Driver emptydir doesn't support PreprovisionedPV -- skipping
[AfterEach] [Testpattern: Pre-provisioned PV (ext4)] volumes
  test/e2e/framework/framework.go:175

... skipping 103 lines ...
• [SLOW TEST:35.995 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":16,"skipped":107,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:19:06.043: INFO: Only supported for node OS distro [gci ubuntu custom] (not debian)
... skipping 27 lines ...
  test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-5v6c
STEP: Creating a pod to test atomic-volume-subpath
Jul  7 08:18:29.505: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5v6c" in namespace "subpath-7966" to be "Succeeded or Failed"
Jul  7 08:18:29.576: INFO: Pod "pod-subpath-test-configmap-5v6c": Phase="Pending", Reason="", readiness=false. Elapsed: 70.999699ms
Jul  7 08:18:31.726: INFO: Pod "pod-subpath-test-configmap-5v6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220890827s
Jul  7 08:18:33.746: INFO: Pod "pod-subpath-test-configmap-5v6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.240835867s
Jul  7 08:18:35.760: INFO: Pod "pod-subpath-test-configmap-5v6c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.254549435s
Jul  7 08:18:37.952: INFO: Pod "pod-subpath-test-configmap-5v6c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.446621575s
Jul  7 08:18:40.014: INFO: Pod "pod-subpath-test-configmap-5v6c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.509054448s
... skipping 8 lines ...
Jul  7 08:18:59.220: INFO: Pod "pod-subpath-test-configmap-5v6c": Phase="Running", Reason="", readiness=true. Elapsed: 29.714826145s
Jul  7 08:19:01.318: INFO: Pod "pod-subpath-test-configmap-5v6c": Phase="Running", Reason="", readiness=true. Elapsed: 31.812391043s
Jul  7 08:19:03.365: INFO: Pod "pod-subpath-test-configmap-5v6c": Phase="Running", Reason="", readiness=true. Elapsed: 33.860118506s
Jul  7 08:19:05.391: INFO: Pod "pod-subpath-test-configmap-5v6c": Phase="Running", Reason="", readiness=true. Elapsed: 35.88607612s
Jul  7 08:19:07.463: INFO: Pod "pod-subpath-test-configmap-5v6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.957621661s
STEP: Saw pod success
Jul  7 08:19:07.463: INFO: Pod "pod-subpath-test-configmap-5v6c" satisfied condition "Succeeded or Failed"
Jul  7 08:19:07.572: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-configmap-5v6c container test-container-subpath-configmap-5v6c: <nil>
STEP: delete the pod
Jul  7 08:19:08.234: INFO: Waiting for pod pod-subpath-test-configmap-5v6c to disappear
Jul  7 08:19:08.251: INFO: Pod pod-subpath-test-configmap-5v6c no longer exists
STEP: Deleting pod pod-subpath-test-configmap-5v6c
Jul  7 08:19:08.251: INFO: Deleting pod "pod-subpath-test-configmap-5v6c" in namespace "subpath-7966"
... skipping 8 lines ...
test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":37,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 36 lines ...
• [SLOW TEST:34.522 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":18,"skipped":102,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:19:10.438: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
  test/e2e/framework/framework.go:175

... skipping 21 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-e703fa1e-7f36-4ac4-be48-98b2dcda70f3
STEP: Creating a pod to test consume configMaps
Jul  7 08:18:51.477: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-798a0ba0-3a0c-42cf-bfde-26283b840413" in namespace "projected-519" to be "Succeeded or Failed"
Jul  7 08:18:51.490: INFO: Pod "pod-projected-configmaps-798a0ba0-3a0c-42cf-bfde-26283b840413": Phase="Pending", Reason="", readiness=false. Elapsed: 12.668751ms
Jul  7 08:18:53.623: INFO: Pod "pod-projected-configmaps-798a0ba0-3a0c-42cf-bfde-26283b840413": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145844639s
Jul  7 08:18:55.654: INFO: Pod "pod-projected-configmaps-798a0ba0-3a0c-42cf-bfde-26283b840413": Phase="Pending", Reason="", readiness=false. Elapsed: 4.176788059s
Jul  7 08:18:57.693: INFO: Pod "pod-projected-configmaps-798a0ba0-3a0c-42cf-bfde-26283b840413": Phase="Pending", Reason="", readiness=false. Elapsed: 6.216423997s
Jul  7 08:18:59.792: INFO: Pod "pod-projected-configmaps-798a0ba0-3a0c-42cf-bfde-26283b840413": Phase="Pending", Reason="", readiness=false. Elapsed: 8.314607358s
Jul  7 08:19:01.812: INFO: Pod "pod-projected-configmaps-798a0ba0-3a0c-42cf-bfde-26283b840413": Phase="Pending", Reason="", readiness=false. Elapsed: 10.334675494s
Jul  7 08:19:03.856: INFO: Pod "pod-projected-configmaps-798a0ba0-3a0c-42cf-bfde-26283b840413": Phase="Pending", Reason="", readiness=false. Elapsed: 12.379416489s
Jul  7 08:19:05.950: INFO: Pod "pod-projected-configmaps-798a0ba0-3a0c-42cf-bfde-26283b840413": Phase="Pending", Reason="", readiness=false. Elapsed: 14.47266917s
Jul  7 08:19:08.031: INFO: Pod "pod-projected-configmaps-798a0ba0-3a0c-42cf-bfde-26283b840413": Phase="Pending", Reason="", readiness=false. Elapsed: 16.553625849s
Jul  7 08:19:10.210: INFO: Pod "pod-projected-configmaps-798a0ba0-3a0c-42cf-bfde-26283b840413": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.732777639s
STEP: Saw pod success
Jul  7 08:19:10.210: INFO: Pod "pod-projected-configmaps-798a0ba0-3a0c-42cf-bfde-26283b840413" satisfied condition "Succeeded or Failed"
Jul  7 08:19:10.296: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-798a0ba0-3a0c-42cf-bfde-26283b840413 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jul  7 08:19:11.080: INFO: Waiting for pod pod-projected-configmaps-798a0ba0-3a0c-42cf-bfde-26283b840413 to disappear
Jul  7 08:19:11.325: INFO: Pod pod-projected-configmaps-798a0ba0-3a0c-42cf-bfde-26283b840413 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:175
... skipping 4 lines ...
• [SLOW TEST:20.398 seconds]
[sig-storage] Projected configMap
test/e2e/common/projected_configmap.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":137,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] GCP Volumes
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 68 lines ...
• [SLOW TEST:12.649 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":3,"skipped":22,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  test/e2e/storage/testsuites/base.go:127
Jul  7 08:19:12.757: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 27 lines ...
  test/e2e/kubectl/kubectl.go:255
[It] should check if cluster-info dump succeeds
  test/e2e/kubectl/kubectl.go:1094
STEP: running cluster-info dump
Jul  7 08:19:06.467: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:35195 --kubeconfig=/root/.kube/kind-test-config cluster-info dump'
Jul  7 08:19:12.270: INFO: stderr: ""
Jul  7 08:19:12.367: INFO: stdout: "{\n    \"kind\": \"NodeList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"selfLink\": \"/api/v1/nodes\",\n        \"resourceVersion\": \"20655\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kind-control-plane\",\n                \"selfLink\": \"/api/v1/nodes/kind-control-plane\",\n                \"uid\": \"1acd5c5f-0d34-41a0-9698-77f8ffbc1d30\",\n                \"resourceVersion\": \"15549\",\n                \"creationTimestamp\": \"2020-07-07T08:05:52Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"kind-control-plane\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"node-role.kubernetes.io/master\": \"\"\n                },\n                \"annotations\": {\n                    \"kubeadm.alpha.kubernetes.io/cri-socket\": \"unix:///run/containerd/containerd.sock\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                },\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kubeadm\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-07-07T08:05:54Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:metadata\": {\n                                \"f:annotations\": {\n                                    \"f:kubeadm.alpha.kubernetes.io/cri-socket\": {}\n                                },\n                                \"f:labels\": {\n                                    \"f:node-role.kubernetes.io/master\": {}\n                                }\n                            }\n                        }\n                    },\n                    {\n                        \"manager\": \"kube-controller-manager\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-07-07T08:06:31Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:metadata\": {\n                                \"f:annotations\": {\n                                    \"f:node.alpha.kubernetes.io/ttl\": {}\n                                },\n                                \"f:labels\": {\n                                    \"f:beta.kubernetes.io/arch\": {},\n                                    \"f:beta.kubernetes.io/os\": {}\n                                }\n                            },\n                            \"f:spec\": {\n                                \"f:podCIDR\": {},\n                                \"f:podCIDRs\": {\n                                    \".\": {},\n                                    \"v:\\\"10.244.0.0/24\\\"\": {}\n                                },\n                                \"f:taints\": {}\n                            }\n                        }\n                    },\n                    {\n                        \"manager\": \"kubelet\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": 
\"v1\",\n                        \"time\": \"2020-07-07T08:16:37Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:metadata\": {\n                                \"f:annotations\": {\n                                    \".\": {},\n                                    \"f:volumes.kubernetes.io/controller-managed-attach-detach\": {}\n                                },\n                                \"f:labels\": {\n                                    \".\": {},\n                                    \"f:kubernetes.io/arch\": {},\n                                    \"f:kubernetes.io/hostname\": {},\n                                    \"f:kubernetes.io/os\": {}\n                                }\n                            },\n                            \"f:status\": {\n                                \"f:addresses\": {\n                                    \".\": {},\n                                    \"k:{\\\"type\\\":\\\"Hostname\\\"}\": {\n                                        \".\": {},\n                                        \"f:address\": {},\n                                        \"f:type\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"InternalIP\\\"}\": {\n                                        \".\": {},\n                                        \"f:address\": {},\n                                        \"f:type\": {}\n                                    }\n                                },\n                                \"f:allocatable\": {\n                                    \".\": {},\n                                    \"f:cpu\": {},\n                                    \"f:ephemeral-storage\": {},\n                                    \"f:hugepages-1Gi\": {},\n                                    \"f:hugepages-2Mi\": {},\n                                    \"f:memory\": {},\n                                    \"f:pods\": {}\n                                },\n                                \"f:capacity\": {\n                                    \".\": {},\n                                    \"f:cpu\": {},\n                                    \"f:ephemeral-storage\": {},\n                                    \"f:hugepages-1Gi\": {},\n                                    \"f:hugepages-2Mi\": {},\n                                    \"f:memory\": {},\n                                    \"f:pods\": {}\n                                },\n                                \"f:conditions\": {\n                                    \".\": {},\n                                    \"k:{\\\"type\\\":\\\"DiskPressure\\\"}\": {\n                                        \".\": {},\n                                        \"f:lastHeartbeatTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:message\": {},\n                                        \"f:reason\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"MemoryPressure\\\"}\": {\n                                        \".\": {},\n                                        \"f:lastHeartbeatTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:message\": {},\n                        
                \"f:reason\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"PIDPressure\\\"}\": {\n                                        \".\": {},\n                                        \"f:lastHeartbeatTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:message\": {},\n                                        \"f:reason\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                        \".\": {},\n                                        \"f:lastHeartbeatTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:message\": {},\n                                        \"f:reason\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {}\n                                    }\n                                },\n                                \"f:daemonEndpoints\": {\n                                    \"f:kubeletEndpoint\": {\n                                        \"f:Port\": {}\n                                    }\n                                },\n                                \"f:images\": {},\n                                \"f:nodeInfo\": {\n                                    \"f:architecture\": {},\n                                    \"f:bootID\": {},\n                                    \"f:containerRuntimeVersion\": {},\n                                    \"f:kernelVersion\": {},\n                                    \"f:kubeProxyVersion\": {},\n                                    \"f:kubeletVersion\": {},\n                                    \"f:machineID\": {},\n                                    \"f:operatingSystem\": {},\n                                    \"f:osImage\": {},\n                                    \"f:systemUUID\": {}\n                                }\n                            }\n                        }\n                    }\n                ]\n            },\n            \"spec\": {\n                \"podCIDR\": \"10.244.0.0/24\",\n                \"podCIDRs\": [\n                    \"10.244.0.0/24\"\n                ],\n                \"taints\": [\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ]\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"cpu\": \"8\",\n                    \"ephemeral-storage\": \"253882800Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"53582972Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"cpu\": \"8\",\n                    \"ephemeral-storage\": \"253882800Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"53582972Ki\",\n                    \"pods\": \"110\"\n                },\n                
\"conditions\": [\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2020-07-07T08:16:37Z\",\n                        \"lastTransitionTime\": \"2020-07-07T08:05:51Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2020-07-07T08:16:37Z\",\n                        \"lastTransitionTime\": \"2020-07-07T08:05:51Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2020-07-07T08:16:37Z\",\n                        \"lastTransitionTime\": \"2020-07-07T08:05:51Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2020-07-07T08:16:37Z\",\n                        \"lastTransitionTime\": \"2020-07-07T08:06:31Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.18.0.3\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"kind-control-plane\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"db8bedd0c7aa4d0da36bd02d09474b4b\",\n                    \"systemUUID\": \"93606c0a-ce71-4adb-9aad-d9e2feffef01\",\n                    \"bootID\": \"49692b2c-f58d-4d2b-81b9-c12e37eddbb9\",\n                    \"kernelVersion\": \"4.15.0-1044-gke\",\n                    \"osImage\": \"Ubuntu 20.04 LTS\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.0-beta.1-55-gc7518074\",\n                    \"kubeletVersion\": \"v1.19.0-beta.2.778+3615291cb3ef45\",\n                    \"kubeProxyVersion\": \"v1.19.0-beta.2.778+3615291cb3ef45\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/etcd:3.4.7-0\"\n                        ],\n                        \"sizeBytes\": 299470271\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-apiserver:v1.19.0-beta.2.778_3615291cb3ef45\"\n                        ],\n                        \"sizeBytes\": 151674724\n     
               },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-controller-manager:v1.19.0-beta.2.778_3615291cb3ef45\"\n                        ],\n                        \"sizeBytes\": 136273773\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy:v1.19.0-beta.2.778_3615291cb3ef45\"\n                        ],\n                        \"sizeBytes\": 127784441\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/kindest/kindnetd:v20200619-15f5b3ab\"\n                        ],\n                        \"sizeBytes\": 120473968\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-scheduler:v1.19.0-beta.2.778_3615291cb3ef45\"\n                        ],\n                        \"sizeBytes\": 56504163\n                    },\n                    {\n                        \"names\": [\n                            \"us.gcr.io/k8s-artifacts-prod/build-image/debian-base:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 53876619\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/coredns:1.6.7\"\n                        ],\n                        \"sizeBytes\": 43921887\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/rancher/local-path-provisioner:v0.0.12\"\n                        ],\n                        \"sizeBytes\": 41994847\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 685724\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kind-worker\",\n                \"selfLink\": \"/api/v1/nodes/kind-worker\",\n                \"uid\": \"5db735e4-e768-488a-af19-9d9f5f89ae5e\",\n                \"resourceVersion\": \"20345\",\n                \"creationTimestamp\": \"2020-07-07T08:06:38Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"kind-worker\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"topology.hostpath.csi/node\": \"kind-worker\"\n                },\n                \"annotations\": {\n                    \"csi.volume.kubernetes.io/nodeid\": \"{\\\"csi-hostpath-ephemeral-4351\\\":\\\"kind-worker\\\",\\\"csi-hostpath-ephemeral-8952\\\":\\\"kind-worker\\\",\\\"csi-hostpath-ephemeral-9720\\\":\\\"kind-worker\\\",\\\"csi-hostpath-provisioning-3249\\\":\\\"kind-worker\\\",\\\"csi-hostpath-provisioning-9150\\\":\\\"kind-worker\\\",\\\"csi-hostpath-volume-6332\\\":\\\"kind-worker\\\",\\\"csi-mock-csi-mock-volumes-1668\\\":\\\"csi-mock-csi-mock-volumes-1668\\\",\\\"csi-mock-csi-mock-volumes-5556\\\":\\\"csi-mock-csi-mock-volumes-5556\\\",\\\"csi-mock-csi-mock-volumes-9793\\\":\\\"csi-mock-csi-mock-volumes-9793\\\"}\",\n                    \"kubeadm.alpha.kubernetes.io/cri-socket\": \"unix:///run/containerd/containerd.sock\",\n                
    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                },\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kubeadm\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-07-07T08:06:38Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:metadata\": {\n                                \"f:annotations\": {\n                                    \"f:kubeadm.alpha.kubernetes.io/cri-socket\": {}\n                                }\n                            }\n                        }\n                    },\n                    {\n                        \"manager\": \"kube-controller-manager\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-07-07T08:18:46Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:metadata\": {\n                                \"f:annotations\": {\n                                    \"f:node.alpha.kubernetes.io/ttl\": {}\n                                },\n                                \"f:labels\": {\n                                    \"f:beta.kubernetes.io/arch\": {},\n                                    \"f:beta.kubernetes.io/os\": {}\n                                }\n                            },\n                            \"f:spec\": {\n                                \"f:podCIDR\": {},\n                                \"f:podCIDRs\": {\n                                    \".\": {},\n                                    \"v:\\\"10.244.1.0/24\\\"\": {}\n                                }\n                            }\n                        }\n                    },\n                    {\n                        \"manager\": \"kubelet\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-07-07T08:18:56Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:metadata\": {\n                                \"f:annotations\": {\n                                    \".\": {},\n                                    \"f:csi.volume.kubernetes.io/nodeid\": {},\n                                    \"f:volumes.kubernetes.io/controller-managed-attach-detach\": {}\n                                },\n                                \"f:labels\": {\n                                    \".\": {},\n                                    \"f:kubernetes.io/arch\": {},\n                                    \"f:kubernetes.io/hostname\": {},\n                                    \"f:kubernetes.io/os\": {},\n                                    \"f:topology.hostpath.csi/node\": {}\n                                }\n                            },\n                            \"f:status\": {\n                                \"f:addresses\": {\n                                    \".\": {},\n                                    \"k:{\\\"type\\\":\\\"Hostname\\\"}\": {\n                                        \".\": {},\n                                        \"f:address\": {},\n                                        \"f:type\": 
{}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"InternalIP\\\"}\": {\n                                        \".\": {},\n                                        \"f:address\": {},\n                                        \"f:type\": {}\n                                    }\n                                },\n                                \"f:allocatable\": {\n                                    \".\": {},\n                                    \"f:cpu\": {},\n                                    \"f:ephemeral-storage\": {},\n                                    \"f:hugepages-1Gi\": {},\n                                    \"f:hugepages-2Mi\": {},\n                                    \"f:memory\": {},\n                                    \"f:pods\": {}\n                                },\n                                \"f:capacity\": {\n                                    \".\": {},\n                                    \"f:cpu\": {},\n                                    \"f:ephemeral-storage\": {},\n                                    \"f:hugepages-1Gi\": {},\n                                    \"f:hugepages-2Mi\": {},\n                                    \"f:memory\": {},\n                                    \"f:pods\": {}\n                                },\n                                \"f:conditions\": {\n                                    \".\": {},\n                                    \"k:{\\\"type\\\":\\\"DiskPressure\\\"}\": {\n                                        \".\": {},\n                                        \"f:lastHeartbeatTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:message\": {},\n                                        \"f:reason\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"MemoryPressure\\\"}\": {\n                                        \".\": {},\n                                        \"f:lastHeartbeatTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:message\": {},\n                                        \"f:reason\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"PIDPressure\\\"}\": {\n                                        \".\": {},\n                                        \"f:lastHeartbeatTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:message\": {},\n                                        \"f:reason\": {},\n                                        \"f:status\": {},\n                                        \"f:type\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                        \".\": {},\n                                        \"f:lastHeartbeatTime\": {},\n                                        \"f:lastTransitionTime\": {},\n                                        \"f:message\": {},\n                                        \"f:reason\": {},\n                                        \"f:status\": {},\n                      
                  \"f:type\": {}\n                                    }\n                                },\n                                \"f:daemonEndpoints\": {\n                                    \"f:kubeletEndpoint\": {\n                                        \"f:Port\": {}\n                                    }\n                                },\n                                \"f:images\": {},\n                                \"f:nodeInfo\": {\n                                    \"f:architecture\": {},\n                                    \"f:bootID\": {},\n                                    \"f:containerRuntimeVersion\": {},\n                                    \"f:kernelVersion\": {},\n                                    \"f:kubeProxyVersion\": {},\n                                    \"f:kubeletVersion\": {},\n                                    \"f:machineID\": {},\n                                    \"f:operatingSystem\": {},\n                                    \"f:osImage\": {},\n                                    \"f:systemUUID\": {}\n                                }\n                            }\n                        }\n                    }\n                ]\n            },\n            \"spec\": {\n                \"podCIDR\": \"10.244.1.0/24\",\n                \"podCIDRs\": [\n                    \"10.244.1.0/24\"\n                ]\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"cpu\": \"8\",\n                    \"ephemeral-storage\": \"253882800Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"53582972Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"cpu\": \"8\",\n                    \"ephemeral-storage\": \"253882800Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"53582972Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2020-07-07T08:18:56Z\",\n                        \"lastTransitionTime\": \"2020-07-07T08:06:38Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2020-07-07T08:18:56Z\",\n                        \"lastTransitionTime\": \"2020-07-07T08:06:38Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2020-07-07T08:18:56Z\",\n                        \"lastTransitionTime\": \"2020-07-07T08:06:38Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                     
   \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2020-07-07T08:18:56Z\",\n                        \"lastTransitionTime\": \"2020-07-07T08:07:08Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"172.18.0.4\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"kind-worker\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"f38bb438b21a4071a9d06f6bf0b5376a\",\n                    \"systemUUID\": \"784c18c7-bc86-448d-821b-7d6e90044add\",\n                    \"bootID\": \"49692b2c-f58d-4d2b-81b9-c12e37eddbb9\",\n                    \"kernelVersion\": \"4.15.0-1044-gke\",\n                    \"osImage\": \"Ubuntu 20.04 LTS\",\n                    \"containerRuntimeVersion\": \"containerd://1.4.0-beta.1-55-gc7518074\",\n                    \"kubeletVersion\": \"v1.19.0-beta.2.778+3615291cb3ef45\",\n                    \"kubeProxyVersion\": \"v1.19.0-beta.2.778+3615291cb3ef45\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/etcd:3.4.7-0\"\n                        ],\n                        \"sizeBytes\": 299470271\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-apiserver:v1.19.0-beta.2.778_3615291cb3ef45\"\n                        ],\n                        \"sizeBytes\": 151674724\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/k8s-staging-csi/nfs-provisioner@sha256:c1bedac8758029948afe060bf8f6ee63ea489b5e08d29745f44fab68ee0d46ca\",\n                            \"gcr.io/k8s-staging-csi/nfs-provisioner:v2.2.2\"\n                        ],\n                        \"sizeBytes\": 138177747\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-controller-manager:v1.19.0-beta.2.778_3615291cb3ef45\"\n                        ],\n                        \"sizeBytes\": 136273773\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy:v1.19.0-beta.2.778_3615291cb3ef45\"\n                        ],\n                        \"sizeBytes\": 127784441\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/kindest/kindnetd:v20200619-15f5b3ab\"\n                        ],\n                        \"sizeBytes\": 120473968\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71\",\n                            
\"gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0\"\n                        ],\n                        \"sizeBytes\": 82348896\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-scheduler:v1.19.0-beta.2.778_3615291cb3ef45\"\n                        ],\n                        \"sizeBytes\": 56504163\n                    },\n                    {\n                        \"names\": [\n                            \"us.gcr.io/k8s-artifacts-prod/build-image/debian-base:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 53876619\n                    },\n                    {\n                        \"names\": [\n                            \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\",\n                            \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20\"\n                        ],\n                        \"sizeBytes\": 46251412\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/coredns:1.6.7\"\n                        ],\n                        \"sizeBytes\": 43921887\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/rancher/local-path-provisioner:v0.0.12\"\n                        ],\n                        \"sizeBytes\": 41994847\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a\",\n                            \"docker.io/library/httpd:2.4.39-alpine\"\n                        ],\n                        \"sizeBytes\": 41901429\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                            \"docker.io/library/httpd:2.4.38-alpine\"\n                        ],\n                        \"sizeBytes\": 40765017\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/k8s-staging-csi/csi-provisioner@sha256:b3f13b7636b1da131aef57c8b3a78b6e26c15cde74bfb8973ddb84dd724ea31b\",\n                            \"gcr.io/k8s-staging-csi/csi-provisioner:v2.0.0-rc1\"\n                        ],\n                        \"sizeBytes\": 19415418\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/k8s-staging-csi/csi-provisioner@sha256:78e3393f5fd5ff6c1e5dada2478cfa456fb7164929e573cf9a87bf6532730679\",\n                            \"gcr.io/k8s-staging-csi/csi-provisioner:v1.6.0\"\n                        ],\n                        \"sizeBytes\": 19408504\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/k8s-staging-csi/csi-attacher@sha256:8fcb9472310dd424c4da8ee06ff200b5e6f091dff39a079e470599e4d0dcf328\",\n                            \"gcr.io/k8s-staging-csi/csi-attacher:v3.0.0-rc1\"\n                        ],\n                        \"sizeBytes\": 18637792\n                    },\n                    {\n                        \"names\": [\n                            
\"gcr.io/k8s-staging-csi/csi-snapshotter@sha256:35ead85dd09aa8cc612fdb598d4e0e2f048bef816f1b74df5eeab67cd21b10aa\",\n                            \"gcr.io/k8s-staging-csi/csi-snapshotter:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 18487038\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/k8s-staging-csi/csi-attacher@sha256:2ffa647e8107cfd39e5f464e738dce014c9f5e51b108da36c3ab621048d0bbab\",\n                            \"gcr.io/k8s-staging-csi/csi-attacher:v2.2.0\"\n                        ],\n                        \"sizeBytes\": 18451536\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/k8s-staging-csi/csi-resizer@sha256:43195976fb9f94d943f5dd9d58b8afa543be22d09a1165e8a489b7dfe22c657a\",\n                            \"gcr.io/k8s-staging-csi/csi-resizer:v0.4.0\"\n                        ],\n                        \"sizeBytes\": 18422462\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/k8s-staging-csi/csi-resizer@sha256:6c6a0332693a7c456378f6abd2bb40611826c1e1a733cadbdae2daab3125b71c\",\n                            \"gcr.io/k8s-staging-csi/csi-resizer:v0.5.0\"\n                        ],\n                        \"sizeBytes\": 18412631\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/k8s-staging-csi/hostpathplugin@sha256:aa223f9df8c1d477a9f2a4a2a7d104561e6d365e54671aacbc770dffcc0683ad\",\n                            \"gcr.io/k8s-staging-csi/hostpathplugin:v1.4.0-rc2\"\n                        ],\n                        \"sizeBytes\": 13210408\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19\",\n                            \"gcr.io/kubernetes-e2e-test-images/echoserver:2.2\"\n                        ],\n                        \"sizeBytes\": 10198788\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/k8s-staging-csi/mock-driver@sha256:0b4273abac4c241fa3d70aaf52b0d79a133d2737081f4a5c5dea4949f6c45dc3\",\n                            \"gcr.io/k8s-staging-csi/mock-driver:v3.1.0\"\n                        ],\n                        \"sizeBytes\": 8761232\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/k8s-staging-csi/csi-node-driver-registrar@sha256:9622c6a6dac7499a055a382930f4de82905a3c5735c0753f7094115c9c871309\",\n                            \"gcr.io/k8s-staging-csi/csi-node-driver-registrar:v1.3.0\"\n                        ],\n                        \"sizeBytes\": 7717137\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/k8s-staging-csi/csi-node-driver-registrar@sha256:273175c272162d480d06849e09e6e3cdb0245239e3a82df6630df3bc059c6571\",\n                            \"gcr.io/k8s-staging-csi/csi-node-driver-registrar:v1.2.0\"\n                        ],\n                        \"sizeBytes\": 7676865\n                    },\n                    {\n                        \"names\": [\n                            
\"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                            \"docker.io/library/nginx:1.14-alpine\"\n                        ],\n                        \"sizeBytes\": 6978806\n                    },\n                    {\n                        \"names\": [\n                            \"gcr.io/k8s-staging-csi/livenessprobe@sha256:f8cec70adc74897ddde5da4f1da0209a497370eaf657566e2b36bc5f0f3ccbd7\",\n                            \"gcr.io/k8s-staging-csi/livenessprobe:v1.1.0\"\n                        ],\n                        \"sizeBytes\": 6691212\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796\",\n                            \"docker.io/library/busybox:1.29\"\n                        ],\n                        \"sizeBytes\": 732685\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause:3.2\"\n                        ],\n                        \"sizeBytes\": 685724\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kind-worker2\",\n                \"selfLink\": \"/api/v1/nodes/kind-worker2\",\n                \"uid\": \"28ffc4e2-0613-4d3d-b51d-b0fb9ecec3c8\",\n                \"resourceVersion\": \"20528\",\n                \"creationTimestamp\": \"2020-07-07T08:06:38Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"kind-worker2\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"topology.hostpath.csi/node\": \"kind-worker2\"\n                },\n                \"annotations\": {\n                    \"csi.volume.kubernetes.io/nodeid\": \"{\\\"csi-hostpath-ephemeral-8415\\\":\\\"kind-worker2\\\",\\\"csi-hostpath-volume-expand-5129\\\":\\\"kind-worker2\\\",\\\"csi-hostpath-volume-expand-5652\\\":\\\"kind-worker2\\\",\\\"csi-hostpath-volumemode-3785\\\":\\\"kind-worker2\\\",\\\"csi-mock-csi-mock-volumes-2772\\\":\\\"csi-mock-csi-mock-volumes-2772\\\",\\\"csi-mock-csi-mock-volumes-4912\\\":\\\"csi-mock-csi-mock-volumes-4912\\\"}\",\n                    \"kubeadm.alpha.kubernetes.io/cri-socket\": \"unix:///run/containerd/containerd.sock\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                },\n                \"managedFields\": [\n                    {\n                        \"manager\": \"kubeadm\",\n                        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-07-07T08:06:38Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:metadata\": {\n                                \"f:annotations\": {\n                                    \"f:kubeadm.alpha.kubernetes.io/cri-socket\": {}\n                                }\n                            }\n                        }\n                    },\n                    {\n                        \"manager\": \"kubelet\",\n                
        \"operation\": \"Update\",\n                        \"apiVersion\": \"v1\",\n                        \"time\": \"2020-07-07T08:19:00Z\",\n                        \"fieldsType\": \"FieldsV1\",\n                        \"fieldsV1\": {\n                            \"f:metadata\": {\n                                \"f:annotations\": {\n                                    \".\": {},\n                                    \"f:csi.volume.kubernetes.io/nodeid\": {},\n                                    \"f:volumes.kubernetes.io/controller-managed-attach-detach\": {}\n                                },\n                                \"f:labels\": {\n                                    \".\": {},\n                                    \"f:kubernetes.io/arch\": {},\n                                    \"f:kubernetes.io/hostname\": {},\n                                    \"f:kubernetes.io/os\": {},\n                                    \"f:topology.hostpath.csi/node\": {}\n                                }\n                            },\n                            \"f:status\": {\n                                \"f:addresses\": {\n                                    \".\": {},\n                                    \"k:{\\\"type\\\":\\\"Hostname\\\"}\": {\n                                        \".\": {},\n                                        \"f:address\": {},\n                                        \"f:type\": {}\n                                    },\n                                    \"k:{\\\"type\\\":\\\"InternalIP\\\"}\": {\n                                        \".\": {},\n                                        \"f:address\": {},\n                                        \"f:type\": {}\n                                    }\n                                },\n                                \"f:allocatable\": {\n                                    \".\": {},\n                                    \"f:cpu\": {},\n                                    \"f:ephemeral-storage\": {},\n                                    \"f:hugepages-1Gi\": {},\n                                    \"f:hugepages-2Mi\": {},\n                                    \"f:memory\": {},\n                                    \"f:pods\": {}\n                                },\n                                \"f:capacity\": {\n                                    \".\": {},\n                                    \"f:cpu\": {},\n                                    \"f:ephemeral-storage\": {},\n                                    \"f:hugepages-1Gi\": {},\n                                    \"f:hugepages-2Mi\": {},\n                                    \"f:memory\": {},\n                                    \"f:pods\": {}\n                                },\n                                \"f:conms):\nTrace[1933922109]: [1.787173424s] [1.787173424s] END\nI0707 08:17:55.050187       1 trace.go:201] Trace[1214947419]: \"Delete\" url:/api/v1/namespaces/provisioning-5520/events (07-Jul-2020 08:17:00.888) (total time: 2161ms):\nTrace[1214947419]: [2.161688381s] [2.161688381s] END\nI0707 08:17:57.349336       1 trace.go:201] Trace[1820268201]: \"Delete\" url:/api/v1/namespaces/disruption-8207/events (07-Jul-2020 08:17:00.869) (total time: 1479ms):\nTrace[1820268201]: [1.479998494s] [1.479998494s] END\nI0707 08:17:58.219952       1 trace.go:201] Trace[2136027403]: \"Get\" url:/api/v1/namespaces/ephemeral-744/pods/csi-hostpath-snapshotter-0/log,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-storage] CSI 
Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should create read-only inline ephemeral volume,client:172.18.0.1 (07-Jul-2020 08:16:00.394) (total time: 84825ms):\nTrace[2136027403]: ---\"Transformed response object\" 84809ms (08:17:00.219)\nTrace[2136027403]: [1m24.825468704s] [1m24.825468704s] END\nI0707 08:17:58.220201       1 trace.go:201] Trace[722415939]: \"Get\" url:/api/v1/namespaces/ephemeral-744/pods/csi-hostpathplugin-0/log,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should create read-only inline ephemeral volume,client:172.18.0.1 (07-Jul-2020 08:16:00.129) (total time: 79090ms):\nTrace[722415939]: ---\"About to write a response\" 194ms (08:16:00.324)\nTrace[722415939]: ---\"Transformed response object\" 78896ms (08:17:00.220)\nTrace[722415939]: [1m19.090997268s] [1m19.090997268s] END\nI0707 08:17:58.220272       1 trace.go:201] Trace[1404953338]: \"Get\" url:/api/v1/namespaces/ephemeral-744/pods/csi-hostpathplugin-0/log,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should create read-only inline ephemeral volume,client:172.18.0.1 (07-Jul-2020 08:16:00.115) (total time: 79104ms):\nTrace[1404953338]: ---\"Transformed response object\" 79094ms (08:17:00.220)\nTrace[1404953338]: [1m19.104334699s] [1m19.104334699s] END\nI0707 08:17:58.220379       1 trace.go:201] Trace[1355352528]: \"Get\" url:/api/v1/namespaces/ephemeral-744/pods/csi-hostpathplugin-0/log,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should create read-only inline ephemeral volume,client:172.18.0.1 (07-Jul-2020 08:16:00.439) (total time: 78780ms):\nTrace[1355352528]: ---\"About to write a response\" 214ms (08:16:00.653)\nTrace[1355352528]: ---\"Transformed response object\" 78566ms (08:17:00.220)\nTrace[1355352528]: [1m18.78077563s] [1m18.78077563s] END\nI0707 08:17:58.220506       1 trace.go:201] Trace[447058744]: \"Get\" url:/api/v1/namespaces/ephemeral-744/pods/csi-hostpath-provisioner-0/log,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should create read-only inline ephemeral volume,client:172.18.0.1 (07-Jul-2020 08:16:00.852) (total time: 96368ms):\nTrace[447058744]: ---\"About to write a response\" 518ms (08:16:00.370)\nTrace[447058744]: ---\"Transformed response object\" 95850ms (08:17:00.220)\nTrace[447058744]: [1m36.368451734s] [1m36.368451734s] END\nI0707 08:17:58.220542       1 trace.go:201] Trace[603137523]: \"Get\" url:/api/v1/namespaces/ephemeral-744/pods/csi-hostpath-attacher-0/log,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should create read-only inline ephemeral volume,client:172.18.0.1 (07-Jul-2020 08:16:00.519) (total time: 99700ms):\nTrace[603137523]: ---\"Transformed response object\" 99615ms (08:17:00.220)\nTrace[603137523]: [1m39.700775427s] [1m39.700775427s] END\nI0707 08:17:58.220741       1 trace.go:201] Trace[417206991]: \"Get\" url:/api/v1/namespaces/ephemeral-744/pods/csi-hostpath-resizer-0/log,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- 
[sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should create read-only inline ephemeral volume,client:172.18.0.1 (07-Jul-2020 08:16:00.023) (total time: 91197ms):\nTrace[417206991]: ---\"Transformed response object\" 91175ms (08:17:00.220)\nTrace[417206991]: [1m31.197490968s] [1m31.197490968s] END\nI0707 08:17:58.928206       1 trace.go:201] Trace[2055556865]: \"Delete\" url:/api/v1/namespaces/ephemeral-4497/events (07-Jul-2020 08:17:00.510) (total time: 3417ms):\nTrace[2055556865]: [3.417696227s] [3.417696227s] END\nI0707 08:18:02.273983       1 trace.go:201] Trace[1758140909]: \"Delete\" url:/api/v1/namespaces/kubectl-9461/events (07-Jul-2020 08:18:00.537) (total time: 736ms):\nTrace[1758140909]: [736.330816ms] [736.330816ms] END\nI0707 08:18:03.065571       1 trace.go:201] Trace[689371549]: \"Delete\" url:/apis/events.k8s.io/v1/namespaces/csi-mock-volumes-5556/events (07-Jul-2020 08:18:00.364) (total time: 700ms):\nTrace[689371549]: [700.892131ms] [700.892131ms] END\nI0707 08:18:03.521514       1 trace.go:201] Trace[1386894791]: \"Delete\" url:/apis/events.k8s.io/v1/namespaces/statefulset-4810/events (07-Jul-2020 08:17:00.937) (total time: 5583ms):\nTrace[1386894791]: [5.583644246s] [5.583644246s] END\nI0707 08:18:05.870118       1 trace.go:201] Trace[1861411874]: \"Delete\" url:/apis/events.k8s.io/v1/namespaces/ephemeral-744/events (07-Jul-2020 08:18:00.522) (total time: 2348ms):\nTrace[1861411874]: [2.348047162s] [2.348047162s] END\nI0707 08:18:07.003106       1 trace.go:201] Trace[1186163109]: \"Get\" url:/api/v1/namespaces/csi-mock-volumes-9806/pods/csi-mockplugin-0/log,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true,client:172.18.0.1 (07-Jul-2020 08:17:00.654) (total time: 64348ms):\nTrace[1186163109]: ---\"Transformed response object\" 64306ms (08:18:00.003)\nTrace[1186163109]: [1m4.348087581s] [1m4.348087581s] END\nI0707 08:18:07.003545       1 trace.go:201] Trace[1312805657]: \"Get\" url:/api/v1/namespaces/csi-mock-volumes-9806/pods/csi-mockplugin-attacher-0/log,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true,client:172.18.0.1 (07-Jul-2020 08:16:00.191) (total time: 95811ms):\nTrace[1312805657]: ---\"Transformed response object\" 95752ms (08:18:00.003)\nTrace[1312805657]: [1m35.811911178s] [1m35.811911178s] END\nI0707 08:18:07.003664       1 trace.go:201] Trace[1172696868]: \"Get\" url:/api/v1/namespaces/csi-mock-volumes-9806/pods/csi-mockplugin-0/log,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true,client:172.18.0.1 (07-Jul-2020 08:17:00.613) (total time: 64390ms):\nTrace[1172696868]: ---\"Transformed response object\" 64376ms (08:18:00.003)\nTrace[1172696868]: [1m4.390264044s] [1m4.390264044s] END\nI0707 08:18:07.003550       1 trace.go:201] Trace[1229542678]: \"Get\" url:/api/v1/namespaces/csi-mock-volumes-9806/pods/csi-mockplugin-0/log,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true,client:172.18.0.1 (07-Jul-2020 08:17:00.427) (total time: 64576ms):\nTrace[1229542678]: ---\"About to write a response\" 123ms 
(08:17:00.551)\nTrace[1229542678]: ---\"Transformed response object\" 64452ms (08:18:00.003)\nTrace[1229542678]: [1m4.576107285s] [1m4.576107285s] END\nI0707 08:18:11.917143       1 trace.go:201] Trace[914438366]: \"Delete\" url:/api/v1/namespaces/provisioning-3249/events (07-Jul-2020 08:18:00.315) (total time: 601ms):\nTrace[914438366]: [601.36862ms] [601.36862ms] END\nI0707 08:18:15.969427       1 trace.go:201] Trace[1725818750]: \"Delete\" url:/apis/events.k8s.io/v1/namespaces/csi-mock-volumes-9806/events (07-Jul-2020 08:18:00.302) (total time: 2666ms):\nTrace[1725818750]: [2.666467349s] [2.666467349s] END\nE0707 08:18:16.857113       1 upgradeaware.go:377] Error proxying data from backend to client: write tcp 172.18.0.3:6443->172.18.0.1:46768: write: broken pipe\nE0707 08:18:16.857562       1 upgradeaware.go:363] Error proxying data from client to backend: write tcp 172.18.0.3:47454->172.18.0.4:10250: write: broken pipe\nE0707 08:18:17.193789       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]\nE0707 08:18:17.194947       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]\nE0707 08:18:18.242395       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]\nE0707 08:18:18.242909       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]\nI0707 08:18:18.454324       1 trace.go:201] Trace[14393796]: \"Delete\" url:/api/v1/namespaces/emptydir-wrapper-9366/events (07-Jul-2020 08:18:00.875) (total time: 579ms):\nTrace[14393796]: [579.261305ms] [579.261305ms] END\nI0707 08:18:19.042279       1 trace.go:201] Trace[929589946]: \"Get\" url:/api/v1/namespaces/provisioning-9859/pods/csi-hostpathplugin-0/log,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted,client:172.18.0.1 (07-Jul-2020 08:17:00.110) (total time: 70931ms):\nTrace[929589946]: ---\"About to write a response\" 333ms (08:17:00.444)\nTrace[929589946]: ---\"Transformed response object\" 70597ms (08:18:00.041)\nTrace[929589946]: [1m10.931000327s] [1m10.931000327s] END\nI0707 08:18:19.043526       1 trace.go:201] Trace[1369197692]: \"Get\" url:/api/v1/namespaces/provisioning-9859/pods/csi-hostpath-attacher-0/log,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted,client:172.18.0.1 (07-Jul-2020 08:16:00.544) (total time: 83498ms):\nTrace[1369197692]: ---\"Transformed response object\" 83484ms (08:18:00.042)\nTrace[1369197692]: [1m23.498309595s] [1m23.498309595s] END\nI0707 08:18:19.042665       1 trace.go:201] Trace[927469320]: \"Get\" url:/api/v1/namespaces/provisioning-9859/pods/csi-hostpath-resizer-0/log,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted,client:172.18.0.1 (07-Jul-2020 08:16:00.362) (total time: 79680ms):\nTrace[927469320]: ---\"About to write a response\" 257ms (08:16:00.620)\nTrace[927469320]: ---\"Transformed 
response object\" 79422ms (08:18:00.042)\nTrace[927469320]: [1m19.680405978s] [1m19.680405978s] END\nI0707 08:18:19.042836       1 trace.go:201] Trace[738381084]: \"Get\" url:/api/v1/namespaces/provisioning-9859/pods/csi-hostpathplugin-0/log,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted,client:172.18.0.1 (07-Jul-2020 08:17:00.861) (total time: 71181ms):\nTrace[738381084]: ---\"About to write a response\" 118ms (08:17:00.979)\nTrace[738381084]: ---\"Transformed response object\" 71063ms (08:18:00.042)\nTrace[738381084]: [1m11.181534075s] [1m11.181534075s] END\nI0707 08:18:19.042882       1 trace.go:201] Trace[1653741025]: \"Get\" url:/api/v1/namespaces/provisioning-9859/pods/csi-hostpath-provisioner-0/log,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted,client:172.18.0.1 (07-Jul-2020 08:16:00.073) (total time: 79969ms):\nTrace[1653741025]: ---\"About to write a response\" 133ms (08:16:00.207)\nTrace[1653741025]: ---\"Transformed response object\" 79835ms (08:18:00.042)\nTrace[1653741025]: [1m19.96954469s] [1m19.96954469s] END\nI0707 08:18:19.042981       1 trace.go:201] Trace[635817078]: \"Get\" url:/api/v1/namespaces/provisioning-9859/pods/csi-hostpathplugin-0/log,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted,client:172.18.0.1 (07-Jul-2020 08:17:00.273) (total time: 71769ms):\nTrace[635817078]: ---\"About to write a response\" 404ms (08:17:00.677)\nTrace[635817078]: ---\"Transformed response object\" 71365ms (08:18:00.042)\nTrace[635817078]: [1m11.769208759s] [1m11.769208759s] END\nI0707 08:18:19.043093       1 trace.go:201] Trace[1396998295]: \"Get\" url:/api/v1/namespaces/provisioning-9859/pods/csi-hostpath-snapshotter-0/log,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted,client:172.18.0.1 (07-Jul-2020 08:17:00.557) (total time: 76485ms):\nTrace[1396998295]: ---\"Transformed response object\" 76475ms (08:18:00.043)\nTrace[1396998295]: [1m16.48599557s] [1m16.48599557s] END\nI0707 08:18:22.281283       1 client.go:360] parsed scheme: \"passthrough\"\nI0707 08:18:22.281462       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}\nI0707 08:18:22.281555       1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nE0707 08:18:22.426439       1 upgradeaware.go:363] Error proxying data from client to backend: write tcp 172.18.0.3:47594->172.18.0.4:10250: write: broken pipe\nI0707 08:18:23.370369       1 trace.go:201] Trace[1887675526]: \"Update\" url:/api/v1/namespaces/csi-mock-volumes-2772/persistentvolumeclaims/pvc-htdhs,user-agent:kube-controller-manager/v1.19.0 (linux/amd64) kubernetes/3615291/system:serviceaccount:kube-system:pvc-protection-controller,client:172.18.0.3 (07-Jul-2020 08:18:00.673) (total time: 696ms):\nTrace[1887675526]: ---\"Object stored in database\" 696ms (08:18:00.370)\nTrace[1887675526]: 
[696.776418ms] [696.776418ms] END\nI0707 08:18:23.730632       1 trace.go:201] Trace[1111225097]: \"Delete\" url:/apis/events.k8s.io/v1/namespaces/persistent-local-volumes-test-6769/events (07-Jul-2020 08:18:00.273) (total time: 1457ms):\nTrace[1111225097]: [1.457338129s] [1.457338129s] END\nI0707 08:18:26.012287       1 trace.go:201] Trace[60311378]: \"Delete\" url:/api/v1/namespaces/persistent-local-volumes-test-6769/secrets (07-Jul-2020 08:18:00.427) (total time: 584ms):\nTrace[60311378]: [584.402737ms] [584.402737ms] END\nI0707 08:18:29.503482       1 trace.go:201] Trace[462617661]: \"Delete\" url:/apis/events.k8s.io/v1/namespaces/provisioning-3770/events (07-Jul-2020 08:18:00.330) (total time: 2172ms):\nTrace[462617661]: [2.172966837s] [2.172966837s] END\nI0707 08:18:31.227101       1 trace.go:201] Trace[250843216]: \"Delete\" url:/apis/events.k8s.io/v1/namespaces/provisioning-9859/events (07-Jul-2020 08:18:00.513) (total time: 4713ms):\nTrace[250843216]: [4.713736852s] [4.713736852s] END\nI0707 08:18:32.088934       1 trace.go:201] Trace[1660506565]: \"Delete\" url:/apis/events.k8s.io/v1/namespaces/csi-mock-volumes-2772/events (07-Jul-2020 08:18:00.299) (total time: 1789ms):\nTrace[1660506565]: [1.789331215s] [1.789331215s] END\nI0707 08:18:33.004376       1 trace.go:201] Trace[1170048539]: \"Delete\" url:/apis/events.k8s.io/v1/namespaces/persistent-local-volumes-test-3715/events (07-Jul-2020 08:18:00.465) (total time: 538ms):\nTrace[1170048539]: [538.773939ms] [538.773939ms] END\nI0707 08:18:36.562229       1 trace.go:201] Trace[7067906]: \"Delete\" url:/apis/events.k8s.io/v1/namespaces/container-runtime-4556/events (07-Jul-2020 08:18:00.604) (total time: 957ms):\nTrace[7067906]: [957.335745ms] [957.335745ms] END\nI0707 08:18:38.659260       1 trace.go:201] Trace[1293620152]: \"Get\" url:/api/v1/namespaces/csi-mock-volumes-5717/pods/csi-mockplugin-0/log,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present,client:172.18.0.1 (07-Jul-2020 08:17:00.586) (total time: 72072ms):\nTrace[1293620152]: ---\"Transformed response object\" 72015ms (08:18:00.659)\nTrace[1293620152]: [1m12.072474493s] [1m12.072474493s] END\nI0707 08:18:38.659470       1 trace.go:201] Trace[1943962584]: \"Get\" url:/api/v1/namespaces/csi-mock-volumes-5717/pods/csi-mockplugin-0/log,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present,client:172.18.0.1 (07-Jul-2020 08:17:00.765) (total time: 71893ms):\nTrace[1943962584]: ---\"Transformed response object\" 71844ms (08:18:00.659)\nTrace[1943962584]: [1m11.893891499s] [1m11.893891499s] END\nI0707 08:18:38.659655       1 trace.go:201] Trace[1529412260]: \"Get\" url:/api/v1/namespaces/csi-mock-volumes-5717/pods/csi-mockplugin-0/log,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present,client:172.18.0.1 (07-Jul-2020 08:17:00.657) (total time: 72002ms):\nTrace[1529412260]: ---\"Transformed response object\" 71969ms (08:18:00.659)\nTrace[1529412260]: [1m12.002575711s] [1m12.002575711s] END\nI0707 08:18:38.659858       1 trace.go:201] Trace[145663627]: \"Get\" url:/api/v1/namespaces/csi-mock-volumes-5717/pods/csi-mockplugin-attacher-0/log,user-agent:e2e.test/v1.19.0 (linux/amd64) 
kubernetes/3615291 -- [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present,client:172.18.0.1 (07-Jul-2020 08:13:00.213) (total time: 303446ms):\nTrace[145663627]: ---\"Transformed response object\" 303424ms (08:18:00.659)\nTrace[145663627]: [5m3.446129819s] [5m3.446129819s] END\nI0707 08:18:40.462169       1 trace.go:201] Trace[1446594378]: \"Get\" url:/api/v1/namespaces/pods-1962/pods/pod-submit-status-2-1,user-agent:kubelet/v1.19.0 (linux/amd64) kubernetes/3615291,client:172.18.0.4 (07-Jul-2020 08:18:00.893) (total time: 569ms):\nTrace[1446594378]: ---\"About to write a response\" 568ms (08:18:00.461)\nTrace[1446594378]: [569.036092ms] [569.036092ms] END\nI0707 08:18:40.544361       1 trace.go:201] Trace[183312825]: \"Create\" url:/api/v1/namespaces/volumemode-3785/events,user-agent:kube-controller-manager/v1.19.0 (linux/amd64) kubernetes/3615291/system:serviceaccount:kube-system:persistent-volume-binder,client:172.18.0.3 (07-Jul-2020 08:18:00.029) (total time: 514ms):\nTrace[183312825]: ---\"Object stored in database\" 514ms (08:18:00.544)\nTrace[183312825]: [514.513959ms] [514.513959ms] END\nI0707 08:18:40.545019       1 trace.go:201] Trace[728379514]: \"List etcd3\" key:/ingress/disruption-8207,resourceVersion:,resourceVersionMatch:,limit:0,continue: (07-Jul-2020 08:18:00.029) (total time: 515ms):\nTrace[728379514]: [515.102036ms] [515.102036ms] END\nI0707 08:18:40.545101       1 trace.go:201] Trace[688657511]: \"Delete\" url:/apis/networking.k8s.io/v1/namespaces/disruption-8207/ingresses (07-Jul-2020 08:18:00.029) (total time: 515ms):\nTrace[688657511]: [515.568314ms] [515.568314ms] END\nI0707 08:18:40.545174       1 trace.go:201] Trace[1445652766]: \"List etcd3\" key:/secrets/ephemeral-744,resourceVersion:,resourceVersionMatch:,limit:0,continue: (07-Jul-2020 08:18:00.032) (total time: 512ms):\nTrace[1445652766]: [512.431225ms] [512.431225ms] END\nI0707 08:18:40.545252       1 trace.go:201] Trace[1742999508]: \"Delete\" url:/api/v1/namespaces/ephemeral-744/secrets (07-Jul-2020 08:18:00.032) (total time: 512ms):\nTrace[1742999508]: [512.66293ms] [512.66293ms] END\nI0707 08:18:40.545275       1 trace.go:201] Trace[1027573902]: \"List etcd3\" key:/pods/container-runtime-4556,resourceVersion:,resourceVersionMatch:,limit:0,continue: (07-Jul-2020 08:18:00.029) (total time: 515ms):\nTrace[1027573902]: [515.538893ms] [515.538893ms] END\nI0707 08:18:40.545363       1 trace.go:201] Trace[2093365999]: \"List\" url:/api/v1/namespaces/container-runtime-4556/pods,user-agent:kube-controller-manager/v1.19.0 (linux/amd64) kubernetes/3615291/system:serviceaccount:kube-system:namespace-controller,client:172.18.0.3 (07-Jul-2020 08:18:00.029) (total time: 515ms):\nTrace[2093365999]: ---\"Listing from storage done\" 515ms (08:18:00.545)\nTrace[2093365999]: [515.647514ms] [515.647514ms] END\nI0707 08:18:40.546253       1 trace.go:201] Trace[1249814706]: \"List etcd3\" key:/rolebindings/provisioning-9859,resourceVersion:,resourceVersionMatch:,limit:0,continue: (07-Jul-2020 08:18:00.029) (total time: 516ms):\nTrace[1249814706]: [516.568607ms] [516.568607ms] END\nI0707 08:18:40.546329       1 trace.go:201] Trace[713777786]: \"Delete\" url:/apis/rbac.authorization.k8s.io/v1/namespaces/provisioning-9859/rolebindings (07-Jul-2020 08:18:00.029) (total time: 516ms):\nTrace[713777786]: [516.802104ms] [516.802104ms] END\nI0707 08:18:40.547160       1 trace.go:201] Trace[416997122]: \"Delete\" 
url:/api/v1/namespaces/dns-autoscaling-1701/serviceaccounts (07-Jul-2020 08:18:00.819) (total time: 727ms):\nTrace[416997122]: [727.656023ms] [727.656023ms] END\nI0707 08:18:42.526714       1 trace.go:201] Trace[1167802523]: \"Delete\" url:/apis/events.k8s.io/v1/namespaces/security-context-test-9171/events (07-Jul-2020 08:18:00.448) (total time: 1078ms):\nTrace[1167802523]: [1.078081697s] [1.078081697s] END\nI0707 08:18:44.935204       1 trace.go:201] Trace[890283556]: \"Delete\" url:/apis/events.k8s.io/v1/namespaces/volume-3617/events (07-Jul-2020 08:18:00.897) (total time: 2037ms):\nTrace[890283556]: [2.037405895s] [2.037405895s] END\nI0707 08:18:47.017299       1 trace.go:201] Trace[543442945]: \"Delete\" url:/api/v1/namespaces/pods-1962/pods/pod-submit-status-1-2,user-agent:kubelet/v1.19.0 (linux/amd64) kubernetes/3615291,client:172.18.0.4 (07-Jul-2020 08:18:00.474) (total time: 537ms):\nTrace[543442945]: ---\"Object deleted from database\" 537ms (08:18:00.011)\nTrace[543442945]: [537.302634ms] [537.302634ms] END\nI0707 08:18:47.895381       1 trace.go:201] Trace[338643840]: \"Delete\" url:/api/v1/namespaces/csi-mock-volumes-5717/events (07-Jul-2020 08:18:00.546) (total time: 2348ms):\nTrace[338643840]: [2.34888537s] [2.34888537s] END\nI0707 08:18:49.073478       1 trace.go:201] Trace[749266909]: \"Delete\" url:/api/v1/namespaces/projected-940/events (07-Jul-2020 08:18:00.520) (total time: 553ms):\nTrace[749266909]: [553.359344ms] [553.359344ms] END\nI0707 08:18:50.810796       1 trace.go:201] Trace[2120403790]: \"Delete\" url:/api/v1/namespaces/persistent-local-volumes-test-9782/events (07-Jul-2020 08:18:00.915) (total time: 895ms):\nTrace[2120403790]: [895.721133ms] [895.721133ms] END\nI0707 08:18:56.903518       1 trace.go:201] Trace[871856065]: \"List etcd3\" key:/resourcequotas/csi-mock-volumes-5717,resourceVersion:,resourceVersionMatch:,limit:0,continue: (07-Jul-2020 08:18:00.231) (total time: 671ms):\nTrace[871856065]: [671.797677ms] [671.797677ms] END\nI0707 08:18:56.903652       1 trace.go:201] Trace[1962883712]: \"List\" url:/api/v1/namespaces/csi-mock-volumes-5717/resourcequotas,user-agent:kube-controller-manager/v1.19.0 (linux/amd64) kubernetes/3615291/system:serviceaccount:kube-system:namespace-controller,client:172.18.0.3 (07-Jul-2020 08:18:00.231) (total time: 671ms):\nTrace[1962883712]: ---\"Listing from storage done\" 671ms (08:18:00.903)\nTrace[1962883712]: [671.962634ms] [671.962634ms] END\nI0707 08:18:56.910275       1 trace.go:201] Trace[133537243]: \"Delete\" url:/api/v1/namespaces/pods-1962/pods/pod-submit-status-1-4,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [k8s.io] [sig-node] Pods Extended [k8s.io] Pod Container Status should never report success for a pending container,client:172.18.0.1 (07-Jul-2020 08:18:00.206) (total time: 704ms):\nTrace[133537243]: ---\"Object deleted from database\" 703ms (08:18:00.910)\nTrace[133537243]: [704.028219ms] [704.028219ms] END\nI0707 08:18:56.910539       1 trace.go:201] Trace[1317280525]: \"GuaranteedUpdate etcd3\" type:*core.Pod (07-Jul-2020 08:18:00.272) (total time: 637ms):\nTrace[1317280525]: ---\"Transaction committed\" 635ms (08:18:00.910)\nTrace[1317280525]: [637.676125ms] [637.676125ms] END\nI0707 08:18:56.910717       1 trace.go:201] Trace[1374235490]: \"Patch\" url:/api/v1/namespaces/statefulset-4869/pods/ss-2/status,user-agent:kubelet/v1.19.0 (linux/amd64) kubernetes/3615291,client:172.18.0.4 (07-Jul-2020 08:18:00.272) (total time: 637ms):\nTrace[1374235490]: ---\"Object stored in database\" 
635ms (08:18:00.910)\nTrace[1374235490]: [637.983481ms] [637.983481ms] END\nI0707 08:18:56.918823       1 trace.go:201] Trace[1489892050]: \"Get\" url:/api/v1/namespaces/csi-mock-volumes-1668/persistentvolumeclaims/pvc-spzcx,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist,client:172.18.0.1 (07-Jul-2020 08:18:00.228) (total time: 690ms):\nTrace[1489892050]: ---\"About to write a response\" 690ms (08:18:00.918)\nTrace[1489892050]: [690.511563ms] [690.511563ms] END\nI0707 08:18:56.919740       1 trace.go:201] Trace[616234521]: \"Create\" url:/api/v1/namespaces/pods-1962/events,user-agent:kubelet/v1.19.0 (linux/amd64) kubernetes/3615291,client:172.18.0.4 (07-Jul-2020 08:18:00.370) (total time: 549ms):\nTrace[616234521]: ---\"Object stored in database\" 549ms (08:18:00.919)\nTrace[616234521]: [549.355195ms] [549.355195ms] END\nI0707 08:18:56.920778       1 trace.go:201] Trace[846569105]: \"List etcd3\" key:/namespaces,resourceVersion:,resourceVersionMatch:,limit:0,continue: (07-Jul-2020 08:18:00.213) (total time: 707ms):\nTrace[846569105]: [707.712977ms] [707.712977ms] END\nI0707 08:18:56.921358       1 trace.go:201] Trace[274984777]: \"List\" url:/api/v1/namespaces,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on,client:172.18.0.1 (07-Jul-2020 08:18:00.213) (total time: 708ms):\nTrace[274984777]: ---\"Listing from storage done\" 707ms (08:18:00.920)\nTrace[274984777]: [708.308241ms] [708.308241ms] END\nI0707 08:18:57.122213       1 trace.go:201] Trace[1627166925]: \"GuaranteedUpdate etcd3\" type:*core.Endpoints (07-Jul-2020 08:18:00.432) (total time: 689ms):\nTrace[1627166925]: ---\"Transaction committed\" 688ms (08:18:00.122)\nTrace[1627166925]: [689.431528ms] [689.431528ms] END\nI0707 08:18:57.122309       1 trace.go:201] Trace[1181717008]: \"Update\" url:/api/v1/namespaces/volumemode-7425/endpoints/csi-hostpath-attacher,user-agent:kube-controller-manager/v1.19.0 (linux/amd64) kubernetes/3615291/system:serviceaccount:kube-system:endpoint-controller,client:172.18.0.3 (07-Jul-2020 08:18:00.432) (total time: 689ms):\nTrace[1181717008]: ---\"Object stored in database\" 689ms (08:18:00.122)\nTrace[1181717008]: [689.706769ms] [689.706769ms] END\nI0707 08:18:57.135384       1 trace.go:201] Trace[100010125]: \"Get\" url:/api/v1/namespaces/volume-3884/endpoints/example.com-nfs-volume-3884,user-agent:nfs-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:172.18.0.2 (07-Jul-2020 08:18:00.478) (total time: 657ms):\nTrace[100010125]: ---\"About to write a response\" 657ms (08:18:00.135)\nTrace[100010125]: [657.223898ms] [657.223898ms] END\nI0707 08:18:57.135707       1 trace.go:201] Trace[305339105]: \"Get\" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:kube-controller-manager/v1.19.0 (linux/amd64) kubernetes/3615291/leader-election,client:172.18.0.3 (07-Jul-2020 08:18:00.482) (total time: 653ms):\nTrace[305339105]: ---\"About to write a response\" 653ms (08:18:00.135)\nTrace[305339105]: [653.179237ms] [653.179237ms] END\nI0707 08:18:57.148228       1 trace.go:201] Trace[295748194]: \"Get\" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:kube-scheduler/v1.19.0 (linux/amd64) kubernetes/3615291/leader-election,client:172.18.0.3 (07-Jul-2020 08:18:00.516) (total time: 
631ms):\nTrace[295748194]: ---\"About to write a response\" 631ms (08:18:00.148)\nTrace[295748194]: [631.741203ms] [631.741203ms] END\nI0707 08:18:57.148342       1 trace.go:201] Trace[1396711747]: \"List etcd3\" key:/networkpolicies/deployment-3513,resourceVersion:,resourceVersionMatch:,limit:0,continue: (07-Jul-2020 08:18:00.278) (total time: 870ms):\nTrace[1396711747]: [870.004624ms] [870.004624ms] END\nI0707 08:18:57.148460       1 trace.go:201] Trace[1397567995]: \"Delete\" url:/apis/networking.k8s.io/v1/namespaces/deployment-3513/networkpolicies (07-Jul-2020 08:18:00.278) (total time: 870ms):\nTrace[1397567995]: [870.366738ms] [870.366738ms] END\nI0707 08:18:57.189258       1 client.go:360] parsed scheme: \"passthrough\"\nI0707 08:18:57.189316       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}\nI0707 08:18:57.189329       1 clientconn.go:948] ClientConn switching balancer to \"pick_first\"\nI0707 08:18:57.750731       1 client.go:360] parsed scheme: \"endpoint\"\nI0707 08:18:57.750785       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]\nI0707 08:18:57.802855       1 client.go:360] parsed scheme: \"endpoint\"\nI0707 08:18:57.802903       1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379  <nil> 0 <nil>}]\nI0707 08:18:57.925948       1 trace.go:201] Trace[1160053346]: \"Delete\" url:/api/v1/namespaces/csi-mock-volumes-9793/events (07-Jul-2020 08:18:00.498) (total time: 2427ms):\nTrace[1160053346]: [2.427514659s] [2.427514659s] END\nI0707 08:19:01.075495       1 trace.go:201] Trace[1521547393]: \"Delete\" url:/api/v1/namespaces/deployment-3513/events (07-Jul-2020 08:18:00.333) (total time: 1741ms):\nTrace[1521547393]: [1.741597799s] [1.741597799s] END\nI0707 08:19:03.284380       1 trace.go:201] Trace[377038637]: \"Call mutating webhook\" configuration:webhook-1997-0,webhook:mutating-is-webhook-configuration-ready.k8s.io,resource:/v1, Resource=configmaps,subresource:,operation:CREATE,UID:c5494fe0-06c3-4ba4-8357-8930f659c9b2 (07-Jul-2020 08:18:00.273) (total time: 10011ms):\nTrace[377038637]: [10.011217708s] [10.011217708s] END\nW0707 08:19:03.284539       1 dispatcher.go:170] Failed calling webhook, failing open mutating-is-webhook-configuration-ready.k8s.io: failed calling webhook \"mutating-is-webhook-configuration-ready.k8s.io\": Post \"https://e2e-test-webhook.webhook-1997.svc:8443/always-deny?timeout=10s\": context deadline exceeded\nE0707 08:19:03.284614       1 dispatcher.go:171] failed calling webhook \"mutating-is-webhook-configuration-ready.k8s.io\": Post \"https://e2e-test-webhook.webhook-1997.svc:8443/always-deny?timeout=10s\": context deadline exceeded\nI0707 08:19:03.437123       1 trace.go:201] Trace[460683407]: \"Create\" url:/api/v1/namespaces/webhook-1997-markers/configmaps,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance],client:172.18.0.1 (07-Jul-2020 08:18:00.271) (total time: 10165ms):\nTrace[460683407]: [10.165879636s] [10.165879636s] END\nI0707 08:19:04.193448       1 trace.go:201] Trace[1549468535]: \"Delete\" url:/api/v1/namespaces/configmap-4880/events (07-Jul-2020 08:19:00.442) (total time: 750ms):\nTrace[1549468535]: [750.71273ms] [750.71273ms] END\nE0707 08:19:04.463740       1 upgradeaware.go:363] Error proxying data from client to backend: write tcp 
172.18.0.3:56954->172.18.0.2:10250: write: broken pipe\nI0707 08:19:04.933136       1 trace.go:201] Trace[753873956]: \"Delete\" url:/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations (07-Jul-2020 08:19:00.015) (total time: 917ms):\nTrace[753873956]: [917.541073ms] [917.541073ms] END\nI0707 08:19:05.603593       1 trace.go:201] Trace[1144525475]: \"Delete\" url:/apis/events.k8s.io/v1/namespaces/csi-mock-volumes-1668/events (07-Jul-2020 08:19:00.622) (total time: 981ms):\nTrace[1144525475]: [981.440659ms] [981.440659ms] END\nI0707 08:19:05.985137       1 trace.go:201] Trace[1259370468]: \"Delete\" url:/api/v1/namespaces/proxy-987/events (07-Jul-2020 08:19:00.141) (total time: 843ms):\nTrace[1259370468]: [843.28874ms] [843.28874ms] END\nI0707 08:19:06.225179       1 trace.go:201] Trace[361302679]: \"Call mutating webhook\" configuration:webhook-6031,webhook:mutating-is-webhook-configuration-ready.k8s.io,resource:/v1, Resource=configmaps,subresource:,operation:CREATE,UID:7a7c8947-d730-4091-95c4-19f30c8bfaed (07-Jul-2020 08:18:00.212) (total time: 10012ms):\nTrace[361302679]: [10.012939489s] [10.012939489s] END\nW0707 08:19:06.226059       1 dispatcher.go:170] Failed calling webhook, failing open mutating-is-webhook-configuration-ready.k8s.io: failed calling webhook \"mutating-is-webhook-configuration-ready.k8s.io\": Post \"https://e2e-test-webhook.webhook-6031.svc:8443/always-deny?timeout=10s\": context deadline exceeded\nE0707 08:19:06.226322       1 dispatcher.go:171] failed calling webhook \"mutating-is-webhook-configuration-ready.k8s.io\": Post \"https://e2e-test-webhook.webhook-6031.svc:8443/always-deny?timeout=10s\": context deadline exceeded\nI0707 08:19:06.387563       1 trace.go:201] Trace[2014792385]: \"Create\" url:/api/v1/namespaces/webhook-6031-markers/configmaps,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance],client:172.18.0.1 (07-Jul-2020 08:18:00.211) (total time: 10175ms):\nTrace[2014792385]: ---\"Object stored in database\" 10175ms (08:19:00.387)\nTrace[2014792385]: [10.175917135s] [10.175917135s] END\nI0707 08:19:07.009031       1 controller.go:606] quota admission added evaluator for: e2e-test-webhook-9483-crds.webhook.example.com\nI0707 08:19:07.759191       1 trace.go:201] Trace[1336254695]: \"Get\" url:/api/v1/namespaces/csi-mock-volumes-5731/pods/csi-mockplugin-resizer-0/log,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on,client:172.18.0.1 (07-Jul-2020 08:17:00.354) (total time: 79404ms):\nTrace[1336254695]: ---\"About to write a response\" 134ms (08:17:00.488)\nTrace[1336254695]: ---\"Transformed response object\" 79270ms (08:19:00.759)\nTrace[1336254695]: [1m19.404570547s] [1m19.404570547s] END\nI0707 08:19:07.759638       1 trace.go:201] Trace[1384740771]: \"Get\" url:/api/v1/namespaces/csi-mock-volumes-5731/pods/csi-mockplugin-0/log,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on,client:172.18.0.1 (07-Jul-2020 08:17:00.258) (total time: 90501ms):\nTrace[1384740771]: ---\"About to write a response\" 166ms (08:17:00.425)\nTrace[1384740771]: ---\"Transformed response object\" 90334ms (08:19:00.759)\nTrace[1384740771]: [1m30.501067127s] 
[1m30.501067127s] END\nI0707 08:19:07.760667       1 trace.go:201] Trace[929417716]: \"Get\" url:/api/v1/namespaces/csi-mock-volumes-5731/pods/csi-mockplugin-0/log,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on,client:172.18.0.1 (07-Jul-2020 08:17:00.567) (total time: 90193ms):\nTrace[929417716]: ---\"About to write a response\" 213ms (08:17:00.780)\nTrace[929417716]: ---\"Transformed response object\" 89979ms (08:19:00.760)\nTrace[929417716]: [1m30.193250174s] [1m30.193250174s] END\nI0707 08:19:07.760819       1 trace.go:201] Trace[1912670137]: \"Get\" url:/api/v1/namespaces/csi-mock-volumes-5731/pods/csi-mockplugin-0/log,user-agent:e2e.test/v1.19.0 (linux/amd64) kubernetes/3615291 -- [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on,client:172.18.0.1 (07-Jul-2020 08:17:00.434) (total time: 90325ms):\nTrace[1912670137]: ---\"Transformed response object\" 90253ms (08:19:00.760)\nTrace[1912670137]: [1m30.325946092s] [1m30.325946092s] END\n==== END logs for container kube-apiserver of pod kube-system/kube-apiserver-kind-control-plane ====\n==== START logs for container kube-controller-manager of pod kube-system/kube-controller-manager-kind-control-plane ====\nFlag --port has been deprecated, see --secure-port instead.\nI0707 08:05:33.399068       1 serving.go:331] Generated self-signed cert in-memory\nI0707 08:05:33.965115       1 controllermanager.go:175] Version: v1.19.0-beta.2.778+3615291cb3ef45\nI0707 08:05:33.968825       1 secure_serving.go:187] Serving securely on 127.0.0.1:10257\nI0707 08:05:33.968889       1 dynamic_cafile_content.go:167] Starting request-header::/etc/kubernetes/pki/front-proxy-ca.crt\nI0707 08:05:33.969711       1 leaderelection.go:243] attempting to acquire leader lease  kube-system/kube-controller-manager...\nI0707 08:05:33.969846       1 tlsconfig.go:240] Starting DynamicServingCertificateController\nI0707 08:05:33.969865       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt\nE0707 08:05:43.971400       1 leaderelection.go:321] error retrieving resource lock kube-system/kube-controller-manager: Get \"https://kind-control-plane:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nE0707 08:05:50.017465       1 leaderelection.go:321] error retrieving resource lock kube-system/kube-controller-manager: endpoints \"kube-controller-manager\" is forbidden: User \"system:kube-controller-manager\" cannot get resource \"endpoints\" in API group \"\" in the namespace \"kube-system\"\nI0707 08:05:52.565040       1 leaderelection.go:253] successfully acquired lease kube-system/kube-controller-manager\nI0707 08:05:52.565246       1 event.go:291] \"Event occurred\" object=\"kube-system/kube-controller-manager\" kind=\"Endpoints\" apiVersion=\"v1\" type=\"Normal\" reason=\"LeaderElection\" message=\"kind-control-plane_227fc49b-1964-4fa7-8539-9d3834b60067 became leader\"\nI0707 08:05:52.565352       1 event.go:291] \"Event occurred\" object=\"kube-system/kube-controller-manager\" kind=\"Lease\" apiVersion=\"coordination.k8s.io/v1\" type=\"Normal\" reason=\"LeaderElection\" message=\"kind-control-plane_227fc49b-1964-4fa7-8539-9d3834b60067 became leader\"\nI0707 08:05:52.981855       1 
shared_informer.go:240] Waiting for caches to sync for tokens\nI0707 08:05:53.082911       1 shared_informer.go:247] Caches are synced for tokens \nI0707 08:05:53.125928       1 controllermanager.go:547] Started \"deployment\"\nI0707 08:05:53.126291       1 deployment_controller.go:153] Starting deployment controller\nI0707 08:05:53.127510       1 shared_informer.go:240] Waiting for caches to sync for deployment\nI0707 08:05:53.227094       1 controllermanager.go:547] Started \"statefulset\"\nI0707 08:05:53.227367       1 stateful_set.go:146] Starting stateful set controller\nI0707 08:05:53.227382       1 shared_informer.go:240] Waiting for caches to sync for stateful set\nI0707 08:05:53.321934       1 controllermanager.go:547] Started \"csrsigning\"\nI0707 08:05:53.323899       1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key\nI0707 08:05:53.324624       1 certificate_controller.go:118] Starting certificate controller \"csrsigning-legacy-unknown\"\nI0707 08:05:53.325417       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-legacy-unknown\nI0707 08:05:53.323710       1 certificate_controller.go:118] Starting certificate controller \"csrsigning-kubelet-serving\"\nI0707 08:05:53.325806       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-serving\nI0707 08:05:53.323731       1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key\nI0707 08:05:53.323751       1 certificate_controller.go:118] Starting certificate controller \"csrsigning-kubelet-client\"\nI0707 08:05:53.323764       1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key\nI0707 08:05:53.323777       1 dynamic_serving_content.go:130] Starting csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key\nI0707 08:05:53.323750       1 certificate_controller.go:118] Starting certificate controller \"csrsigning-kube-apiserver-client\"\nI0707 08:05:53.326191       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client\nI0707 08:05:53.326303       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrsigning-kubelet-client\nI0707 08:05:53.350771       1 controllermanager.go:547] Started \"csrcleaner\"\nI0707 08:05:53.353125       1 cleaner.go:83] Starting CSR cleaner controller\nI0707 08:05:53.418818       1 controllermanager.go:547] Started \"persistentvolume-expander\"\nI0707 08:05:53.421370       1 expand_controller.go:319] Starting expand controller\nI0707 08:05:53.421678       1 shared_informer.go:240] Waiting for caches to sync for expand\nI0707 08:05:53.487448       1 controllermanager.go:547] Started \"pvc-protection\"\nW0707 08:05:53.490149       1 controllermanager.go:539] Skipping \"root-ca-cert-publisher\"\nI0707 08:05:53.490044       1 pvc_protection_controller.go:106] Starting PVC protection controller\nI0707 08:05:53.490879       1 shared_informer.go:240] Waiting for caches to sync for PVC protection\nI0707 08:05:53.578454       1 controllermanager.go:547] Started \"attachdetach\"\nI0707 08:05:53.580320       1 attach_detach_controller.go:322] Starting attach detach controller\nI0707 08:05:53.580404       1 shared_informer.go:240] Waiting for caches to sync for attach detach\nI0707 08:05:53.657473       1 controllermanager.go:547] Started \"replicationcontroller\"\nI0707 08:05:53.658155       1 
replica_set.go:182] Starting replicationcontroller controller\nI0707 08:05:53.658175       1 shared_informer.go:240] Waiting for caches to sync for ReplicationController\nI0707 08:05:53.821632       1 controllermanager.go:547] Started \"namespace\"\nI0707 08:05:53.821632       1 namespace_controller.go:200] Starting namespace controller\nI0707 08:05:53.822077       1 shared_informer.go:240] Waiting for caches to sync for namespace\nI0707 08:05:53.946401       1 controllermanager.go:547] Started \"daemonset\"\nI0707 08:05:53.946463       1 daemon_controller.go:285] Starting daemon sets controller\nI0707 08:05:53.946484       1 shared_informer.go:240] Waiting for caches to sync for daemon sets\nI0707 08:05:54.285603       1 controllermanager.go:547] Started \"disruption\"\nI0707 08:05:54.285689       1 disruption.go:331] Starting disruption controller\nI0707 08:05:54.285698       1 shared_informer.go:240] Waiting for caches to sync for disruption\nI0707 08:05:54.540819       1 controllermanager.go:547] Started \"bootstrapsigner\"\nI0707 08:05:54.541617       1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer\nI0707 08:05:54.791592       1 node_lifecycle_controller.go:77] Sending events to api server\nE0707 08:05:54.794390       1 core.go:229] failed to start cloud node lifecycle controller: no cloud provider provided\nW0707 08:05:54.795219       1 controllermanager.go:539] Skipping \"cloud-node-lifecycle\"\nI0707 08:05:55.050002       1 controllermanager.go:547] Started \"clusterrole-aggregation\"\nI0707 08:05:55.050104       1 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator\nI0707 08:05:55.050115       1 shared_informer.go:240] Waiting for caches to sync for ClusterRoleAggregator\nI0707 08:05:55.310484       1 controllermanager.go:547] Started \"endpoint\"\nI0707 08:05:55.310585       1 endpoints_controller.go:182] Starting endpoint controller\nI0707 08:05:55.310597       1 shared_informer.go:240] Waiting for caches to sync for endpoint\nI0707 08:05:55.541063       1 controllermanager.go:547] Started \"tokencleaner\"\nW0707 08:05:55.541134       1 controllermanager.go:539] Skipping \"ttl-after-finished\"\nI0707 08:05:55.541211       1 tokencleaner.go:118] Starting token cleaner controller\nI0707 08:05:55.541230       1 shared_informer.go:240] Waiting for caches to sync for token_cleaner\nI0707 08:05:55.541239       1 shared_informer.go:247] Caches are synced for token_cleaner \nI0707 08:05:55.685178       1 controllermanager.go:547] Started \"csrapproving\"\nI0707 08:05:55.685251       1 certificate_controller.go:118] Starting certificate controller \"csrapproving\"\nI0707 08:05:55.685653       1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving\nI0707 08:05:55.854293       1 node_ipam_controller.go:91] Sending events to api server.\nI0707 08:06:05.897568       1 range_allocator.go:82] Sending events to api server.\nI0707 08:06:05.929857       1 range_allocator.go:116] No Secondary Service CIDR provided. 
Skipping filtering out secondary service addresses.\nI0707 08:06:05.963323       1 controllermanager.go:547] Started \"nodeipam\"\nI0707 08:06:05.964629       1 node_ipam_controller.go:159] Starting ipam controller\nI0707 08:06:05.966677       1 shared_informer.go:240] Waiting for caches to sync for node\nI0707 08:06:06.007288       1 node_lifecycle_controller.go:380] Sending events to api server.\nI0707 08:06:06.007516       1 taint_manager.go:163] Sending events to api server.\nI0707 08:06:06.007599       1 node_lifecycle_controller.go:508] Controller will reconcile labels.\nI0707 08:06:06.007638       1 controllermanager.go:547] Started \"nodelifecycle\"\nI0707 08:06:06.007874       1 node_lifecycle_controller.go:542] Starting node controller\nI0707 08:06:06.007889       1 shared_informer.go:240] Waiting for caches to sync for taint\nE0707 08:06:06.077822       1 core.go:89] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail\nW0707 08:06:06.078208       1 controllermanager.go:539] Skipping \"service\"\nW0707 08:06:06.079068       1 core.go:243] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.\nW0707 08:06:06.080828       1 controllermanager.go:539] Skipping \"route\"\nI0707 08:06:06.149563       1 controllermanager.go:547] Started \"endpointslice\"\nI0707 08:06:06.151365       1 endpointslice_controller.go:237] Starting endpoint slice controller\nI0707 08:06:06.153309       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice\nI0707 08:06:06.230423       1 controllermanager.go:547] Started \"persistentvolume-binder\"\nI0707 08:06:06.231473       1 pv_controller_base.go:303] Starting persistent volume controller\nI0707 08:06:06.232116       1 shared_informer.go:240] Waiting for caches to sync for persistent volume\nI0707 08:06:06.380845       1 controllermanager.go:547] Started \"podgc\"\nI0707 08:06:06.380885       1 gc_controller.go:89] Starting GC controller\nI0707 08:06:06.381395       1 shared_informer.go:240] Waiting for caches to sync for GC\nI0707 08:06:06.567722       1 controllermanager.go:547] Started \"serviceaccount\"\nI0707 08:06:06.568510       1 serviceaccounts_controller.go:117] Starting service account controller\nI0707 08:06:06.569603       1 shared_informer.go:240] Waiting for caches to sync for service account\nI0707 08:06:06.785759       1 controllermanager.go:547] Started \"garbagecollector\"\nI0707 08:06:06.786280       1 garbagecollector.go:128] Starting garbage collector controller\nI0707 08:06:06.786313       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0707 08:06:06.786343       1 graph_builder.go:282] GraphBuilder running\nI0707 08:06:06.927059       1 controllermanager.go:547] Started \"job\"\nI0707 08:06:06.927270       1 job_controller.go:148] Starting job controller\nI0707 08:06:06.927284       1 shared_informer.go:240] Waiting for caches to sync for job\nI0707 08:06:06.989795       1 controllermanager.go:547] Started \"replicaset\"\nI0707 08:06:06.989962       1 replica_set.go:182] Starting replicaset controller\nI0707 08:06:06.989973       1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet\nI0707 08:06:07.121057       1 controllermanager.go:547] Started \"cronjob\"\nI0707 08:06:07.121409       1 cronjob_controller.go:96] Starting CronJob Manager\nI0707 08:06:07.273004       1 controllermanager.go:547] Started \"ttl\"\nI0707 08:06:07.273622       1 ttl_controller.go:118] 
Starting TTL controller\nI0707 08:06:07.274619       1 shared_informer.go:240] Waiting for caches to sync for TTL\nI0707 08:06:07.884328       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io\nI0707 08:06:07.885013       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps\nI0707 08:06:07.885301       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy\nI0707 08:06:07.885914       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io\nI0707 08:06:07.885990       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io\nI0707 08:06:07.886040       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates\nI0707 08:06:07.886077       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints\nI0707 08:06:07.886113       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions\nI0707 08:06:07.886146       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps\nI0707 08:06:07.886179       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch\nI0707 08:06:07.886341       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts\nI0707 08:06:07.886485       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps\nI0707 08:06:07.886523       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch\nI0707 08:06:07.886561       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges\nI0707 08:06:07.886588       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io\nI0707 08:06:07.886613       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io\nI0707 08:06:07.886651       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps\nI0707 08:06:07.886682       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps\nI0707 08:06:07.886707       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling\nI0707 08:06:07.886746       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io\nI0707 08:06:07.886776       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io\nI0707 08:06:07.886815       1 controllermanager.go:547] Started \"resourcequota\"\nI0707 08:06:07.887065       1 resource_quota_controller.go:272] Starting resource quota controller\nI0707 08:06:07.888615       1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI0707 08:06:07.889290       1 resource_quota_monitor.go:303] QuotaMonitor running\nI0707 08:06:08.222454       1 controllermanager.go:547] Started \"horizontalpodautoscaling\"\nI0707 08:06:08.222833       1 horizontal.go:169] Starting HPA controller\nI0707 08:06:08.223105       1 shared_informer.go:240] Waiting for caches to sync for HPA\nI0707 08:06:08.471760       1 
controllermanager.go:547] Started \"pv-protection\"\nI0707 08:06:08.472832       1 pv_protection_controller.go:83] Starting PV protection controller\nI0707 08:06:08.473860       1 shared_informer.go:240] Waiting for caches to sync for PV protection\nW0707 08:06:08.585299       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName=\"kind-control-plane\" does not exist\nI0707 08:06:08.634718       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown \nI0707 08:06:08.641911       1 shared_informer.go:247] Caches are synced for certificate-csrapproving \nI0707 08:06:08.634667       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client \nI0707 08:06:08.649720       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client \nI0707 08:06:08.650484       1 shared_informer.go:247] Caches are synced for namespace \nI0707 08:06:08.650646       1 shared_informer.go:247] Caches are synced for expand \nI0707 08:06:08.657148       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving \nI0707 08:06:08.657918       1 shared_informer.go:247] Caches are synced for deployment \nI0707 08:06:08.668810       1 shared_informer.go:247] Caches are synced for node \nI0707 08:06:08.669801       1 range_allocator.go:172] Starting range CIDR allocator\nI0707 08:06:08.669916       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator\nI0707 08:06:08.670007       1 shared_informer.go:247] Caches are synced for cidrallocator \nI0707 08:06:08.679302       1 shared_informer.go:247] Caches are synced for service account \nI0707 08:06:08.658154       1 shared_informer.go:247] Caches are synced for bootstrap_signer \nI0707 08:06:08.660822       1 shared_informer.go:247] Caches are synced for daemon sets \nI0707 08:06:08.660836       1 shared_informer.go:247] Caches are synced for stateful set \nI0707 08:06:08.660844       1 shared_informer.go:247] Caches are synced for persistent volume \nI0707 08:06:08.661828       1 shared_informer.go:247] Caches are synced for endpoint_slice \nI0707 08:06:08.682833       1 shared_informer.go:247] Caches are synced for ReplicationController \nI0707 08:06:08.682857       1 shared_informer.go:247] Caches are synced for PV protection \nI0707 08:06:08.683297       1 shared_informer.go:247] Caches are synced for TTL \nI0707 08:06:08.687321       1 shared_informer.go:247] Caches are synced for disruption \nI0707 08:06:08.687354       1 disruption.go:339] Sending events to api server.\nI0707 08:06:08.689132       1 shared_informer.go:247] Caches are synced for GC \nI0707 08:06:08.690827       1 shared_informer.go:247] Caches are synced for ReplicaSet \nI0707 08:06:08.694710       1 shared_informer.go:247] Caches are synced for PVC protection \nI0707 08:06:08.720066       1 shared_informer.go:247] Caches are synced for taint \nI0707 08:06:08.720517       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: \nW0707 08:06:08.720858       1 node_lifecycle_controller.go:1044] Missing timestamp for Node kind-control-plane. Assuming now as a timestamp.\nI0707 08:06:08.721105       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. 
Entering master disruption mode.\nI0707 08:06:08.721670       1 taint_manager.go:187] Starting NoExecuteTaintManager\nI0707 08:06:08.723306       1 event.go:291] \"Event occurred\" object=\"kind-control-plane\" kind=\"Node\" apiVersion=\"v1\" type=\"Normal\" reason=\"RegisteredNode\" message=\"Node kind-control-plane event: Registered Node kind-control-plane in Controller\"\nI0707 08:06:08.727878       1 shared_informer.go:247] Caches are synced for job \nI0707 08:06:08.732079       1 shared_informer.go:247] Caches are synced for HPA \nI0707 08:06:08.744156       1 range_allocator.go:373] Set node kind-control-plane PodCIDR to [10.244.0.0/24]\nI0707 08:06:08.751103       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator \nI0707 08:06:08.784698       1 shared_informer.go:247] Caches are synced for attach detach \nI0707 08:06:08.790443       1 shared_informer.go:247] Caches are synced for resource quota \nI0707 08:06:08.810988       1 event.go:291] \"Event occurred\" object=\"kube-system/coredns\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set coredns-56d88949c8 to 2\"\nI0707 08:06:08.822660       1 shared_informer.go:247] Caches are synced for endpoint \nI0707 08:06:08.955512       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0707 08:06:08.978793       1 event.go:291] \"Event occurred\" object=\"kube-system/coredns-56d88949c8\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: coredns-56d88949c8-vqfs6\"\nI0707 08:06:08.978842       1 event.go:291] \"Event occurred\" object=\"kube-system/etcd-kind-control-plane\" kind=\"Pod\" apiVersion=\"v1\" type=\"Warning\" reason=\"NodeNotReady\" message=\"Node is not ready\"\nI0707 08:06:08.978854       1 event.go:291] \"Event occurred\" object=\"kube-system/kube-apiserver-kind-control-plane\" kind=\"Pod\" apiVersion=\"v1\" type=\"Warning\" reason=\"NodeNotReady\" message=\"Node is not ready\"\nI0707 08:06:09.024055       1 request.go:645] Throttling request took 1.042956952s, request: GET:https://kind-control-plane:6443/apis/networking.k8s.io/v1beta1?timeout=32s\nI0707 08:06:09.112906       1 event.go:291] \"Event occurred\" object=\"kube-system/coredns-56d88949c8\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: coredns-56d88949c8-88rm4\"\nI0707 08:06:09.113366       1 event.go:291] \"Event occurred\" object=\"kube-system/kube-proxy\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: kube-proxy-ls6w9\"\nI0707 08:06:09.156882       1 shared_informer.go:247] Caches are synced for garbage collector \nI0707 08:06:09.186973       1 shared_informer.go:247] Caches are synced for garbage collector \nI0707 08:06:09.187760       1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. 
Proceeding to collect garbage\nE0707 08:06:09.330087       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io \"admin\": the object has been modified; please apply your changes to the latest version and try again\nI0707 08:06:09.735386       1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI0707 08:06:09.735697       1 shared_informer.go:247] Caches are synced for resource quota \nI0707 08:06:10.937468       1 event.go:291] \"Event occurred\" object=\"kube-system/kindnet\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: kindnet-gtccg\"\nE0707 08:06:10.985852       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kindnet\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet\", UID:\"0d96d350-0c81-4b8c-a46a-39ccc4823850\", ResourceVersion:\"395\", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729705969, loc:(*time.Location)(0x6e802e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kubectl-create\", Operation:\"Update\", APIVersion:\"apps/v1\", Time:(*v1.Time)(0xc001f1cf80), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc001f1cfa0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001f1cfc0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001f1cfe0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001f1d000), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001f1d020), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kindnet-cni\", Image:\"kindest/kindnetd:v20200619-15f5b3ab\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"HOST_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc001f1d040)}, v1.EnvVar{Name:\"POD_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc001f1d080)}, v1.EnvVar{Name:\"POD_SUBNET\", Value:\"10.244.0.0/16\", ValueFrom:(*v1.EnvVarSource)(nil)}}, 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"cni-cfg\", ReadOnly:false, MountPath:\"/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc00197fd40), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc00175a708), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"kindnet\", DeprecatedServiceAccount:\"kindnet\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000636ee0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0012ac350)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00175a750)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified; please apply your changes to the latest version and try again\nI0707 08:06:20.068247       1 event.go:291] \"Event occurred\" object=\"local-path-storage/local-path-provisioner\" 
kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set local-path-provisioner-7f55f649f7 to 1\"\nI0707 08:06:20.093186       1 event.go:291] \"Event occurred\" object=\"local-path-storage/local-path-provisioner-7f55f649f7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: local-path-provisioner-7f55f649f7-7h56n\"\nI0707 08:06:33.727174       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.\nW0707 08:06:38.081417       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName=\"kind-worker\" does not exist\nI0707 08:06:38.158386       1 range_allocator.go:373] Set node kind-worker PodCIDR to [10.244.1.0/24]\nI0707 08:06:38.160538       1 event.go:291] \"Event occurred\" object=\"kube-system/kindnet\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: kindnet-k22tn\"\nI0707 08:06:38.161284       1 event.go:291] \"Event occurred\" object=\"kube-system/kube-proxy\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: kube-proxy-mzbx5\"\nE0707 08:06:38.232950       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kindnet\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet\", UID:\"0d96d350-0c81-4b8c-a46a-39ccc4823850\", ResourceVersion:\"479\", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729705969, loc:(*time.Location)(0x6e802e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kubectl-create\", Operation:\"Update\", APIVersion:\"apps/v1\", Time:(*v1.Time)(0xc00074c320), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00074c360)}, v1.ManagedFieldsEntry{Manager:\"kube-controller-manager\", Operation:\"Update\", APIVersion:\"apps/v1\", Time:(*v1.Time)(0xc00074c380), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc00074c3c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00074c3e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00074c420), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), 
GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00074c440), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00074c460), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kindnet-cni\", Image:\"kindest/kindnetd:v20200619-15f5b3ab\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"HOST_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc00074c4c0)}, v1.EnvVar{Name:\"POD_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc00074c500)}, v1.EnvVar{Name:\"POD_SUBNET\", Value:\"10.244.0.0/16\", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"cni-cfg\", ReadOnly:false, MountPath:\"/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc0014c6a20), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc000db2c48), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"kindnet\", DeprecatedServiceAccount:\"kindnet\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0000f82a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), 
TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00136a6e8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc000db2cc0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:1, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:1, NumberAvailable:1, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified; please apply your changes to the latest version and try again\nW0707 08:06:38.728931       1 node_lifecycle_controller.go:1044] Missing timestamp for Node kind-worker. Assuming now as a timestamp.\nI0707 08:06:38.729032       1 event.go:291] \"Event occurred\" object=\"kind-worker\" kind=\"Node\" apiVersion=\"v1\" type=\"Normal\" reason=\"RegisteredNode\" message=\"Node kind-worker event: Registered Node kind-worker in Controller\"\nW0707 08:06:38.840211       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName=\"kind-worker2\" does not exist\nI0707 08:06:38.874824       1 event.go:291] \"Event occurred\" object=\"kube-system/kube-proxy\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: kube-proxy-xnvt9\"\nI0707 08:06:38.881458       1 event.go:291] \"Event occurred\" object=\"kube-system/kindnet\" kind=\"DaemonSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: kindnet-2lvkb\"\nI0707 08:06:38.990392       1 range_allocator.go:373] Set node kind-worker2 PodCIDR to [10.244.2.0/24]\nE0707 08:06:39.067423       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kindnet\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet\", UID:\"0d96d350-0c81-4b8c-a46a-39ccc4823850\", ResourceVersion:\"544\", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729705969, loc:(*time.Location)(0x6e802e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kubectl-create\", Operation:\"Update\", APIVersion:\"apps/v1\", Time:(*v1.Time)(0xc001bd6b60), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc001bd6b80)}, v1.ManagedFieldsEntry{Manager:\"kube-controller-manager\", Operation:\"Update\", APIVersion:\"apps/v1\", Time:(*v1.Time)(0xc001bd6ba0), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc001bd6bc0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001bd6be0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001bd6c00), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001bd6c20), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001bd6c40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), 
FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kindnet-cni\", Image:\"kindest/kindnetd:v20200619-15f5b3ab\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"HOST_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc001bd6c60)}, v1.EnvVar{Name:\"POD_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc001bd6ca0)}, v1.EnvVar{Name:\"POD_SUBNET\", Value:\"10.244.0.0/16\", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"cni-cfg\", ReadOnly:false, MountPath:\"/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc001b98e40), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc0017fb378), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"kindnet\", DeprecatedServiceAccount:\"kindnet\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0001879d0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", 
Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000e570)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0017fb3c0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:2, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:2, NumberAvailable:1, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified; please apply your changes to the latest version and try again\nE0707 08:06:39.077480       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kube-proxy\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy\", UID:\"1723cf4f-52dd-4326-ad1a-dfc50db05800\", ResourceVersion:\"547\", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729705955, loc:(*time.Location)(0x6e802e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kubeadm\", Operation:\"Update\", APIVersion:\"apps/v1\", Time:(*v1.Time)(0xc001f3f620), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc001f3f640)}, v1.ManagedFieldsEntry{Manager:\"kube-controller-manager\", Operation:\"Update\", APIVersion:\"apps/v1\", Time:(*v1.Time)(0xc001f3f660), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc001f3f680)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001f3f6a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"k8s-app\":\"kube-proxy\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"kube-proxy\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), 
PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001ea3900), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001f3f6c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001f3f6e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), 
Containers:[]v1.Container{v1.Container{Name:\"kube-proxy\", Image:\"k8s.gcr.io/kube-proxy:v1.19.0-beta.2.778_3615291cb3ef45\", Command:[]string{\"/usr/local/bin/kube-proxy\", \"--config=/var/lib/kube-proxy/config.conf\", \"--hostname-override=$(NODE_NAME)\"}, Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"NODE_NAME\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc001f3f720)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"kube-proxy\", ReadOnly:false, MountPath:\"/var/lib/kube-proxy\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc001f2b3e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc001f4c648), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string{\"kubernetes.io/os\":\"linux\"}, ServiceAccountName:\"kube-proxy\", DeprecatedServiceAccount:\"kube-proxy\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00002be30), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"CriticalAddonsOnly\", Operator:\"Exists\", Value:\"\", Effect:\"\", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"system-node-critical\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0013266f0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001f4c698)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:2, NumberMisscheduled:0, DesiredNumberScheduled:2, NumberReady:1, ObservedGeneration:1, UpdatedNumberScheduled:2, NumberAvailable:1, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kube-proxy\": the object has been modified; please apply your changes to the latest version and try again\nI0707 08:06:43.730582       1 event.go:291] \"Event occurred\" object=\"kind-worker2\" kind=\"Node\" 
apiVersion=\"v1\" type=\"Normal\" reason=\"RegisteredNode\" message=\"Node kind-worker2 event: Registered Node kind-worker2 in Controller\"\nW0707 08:06:43.730760       1 node_lifecycle_controller.go:1044] Missing timestamp for Node kind-worker2. Assuming now as a timestamp.\nI0707 08:07:31.770210       1 event.go:291] \"Event occurred\" object=\"statefulset-4810/datadir-ss-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0707 08:07:31.770754       1 event.go:291] \"Event occurred\" object=\"statefulset-4810/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success\"\nI0707 08:07:31.894080       1 event.go:291] \"Event occurred\" object=\"statefulset-4810/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nI0707 08:07:32.003348       1 event.go:291] \"Event occurred\" object=\"statefulset-4810/datadir-ss-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"rancher.io/local-path\\\" or manually created by system administrator\"\nI0707 08:07:32.787405       1 event.go:291] \"Event occurred\" object=\"svc-latency-5404/svc-latency-rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: svc-latency-rc-gm52b\"\nI0707 08:07:33.071408       1 event.go:291] \"Event occurred\" object=\"services-5665/externalsvc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: externalsvc-qrvqr\"\nI0707 08:07:33.215430       1 event.go:291] \"Event occurred\" object=\"services-5665/externalsvc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: externalsvc-4qkpw\"\nI0707 08:07:33.341805       1 event.go:291] \"Event occurred\" object=\"webhook-6303/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-bcc959585 to 1\"\nI0707 08:07:33.531281       1 event.go:291] \"Event occurred\" object=\"webhook-6303/sample-webhook-deployment-bcc959585\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-bcc959585-6lvnt\"\nI0707 08:07:34.856591       1 event.go:291] \"Event occurred\" object=\"job-4484/adopt-release\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: adopt-release-7xql9\"\nI0707 08:07:34.888387       1 event.go:291] \"Event occurred\" object=\"job-4484/adopt-release\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: adopt-release-c5lwt\"\nI0707 08:07:35.372604       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-dd94f59b7 to 6\"\nI0707 08:07:35.681883       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" 
reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-g4wpv\"\nI0707 08:07:35.739923       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-j49g2\"\nI0707 08:07:35.740483       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-zhmtd\"\nI0707 08:07:35.740824       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-dd94f59b7 to 7\"\nI0707 08:07:35.800724       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-bgjgv\"\nI0707 08:07:35.802786       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-d8mwl\"\nI0707 08:07:35.804550       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-wfsf2\"\nI0707 08:07:35.997188       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-7746d44bfb to 2\"\nI0707 08:07:36.063463       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-7746d44bfb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-7746d44bfb-hq52m\"\nI0707 08:07:36.178460       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-7746d44bfb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-7746d44bfb-6rx8g\"\nI0707 08:07:36.289191       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-28qvd\"\nI0707 08:07:36.457438       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"DeploymentRollback\" message=\"Rolled back deployment \\\"webserver\\\" to revision 1\"\nE0707 08:07:36.585082       1 replica_set.go:532] sync \"deployment-1755/webserver-dd94f59b7\" failed with Operation cannot be fulfilled on replicasets.apps \"webserver-dd94f59b7\": the object has been modified; please apply your changes to the latest version and try again\nI0707 08:07:36.790915       1 event.go:291] \"Event occurred\" object=\"provisioning-3191/csi-hostpath-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\"\nI0707 08:07:37.149711       1 event.go:291] \"Event occurred\" object=\"provisioning-3191/csi-hostpathplugin\" kind=\"StatefulSet\" 
apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0707 08:07:37.784017       1 event.go:291] \"Event occurred\" object=\"provisioning-3191/csi-hostpath-provisioner\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\"\nI0707 08:07:38.306156       1 event.go:291] \"Event occurred\" object=\"provisioning-3191/csi-hostpath-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\"\nI0707 08:07:38.717390       1 event.go:291] \"Event occurred\" object=\"statefulset-4810/datadir-ss-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"rancher.io/local-path\\\" or manually created by system administrator\"\nI0707 08:07:39.499110       1 event.go:291] \"Event occurred\" object=\"provisioning-3191/csi-hostpath-snapshotter\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful\"\nE0707 08:07:40.127557       1 tokens_controller.go:261] error synchronizing serviceaccount gcp-volume-1918/default: secrets \"default-token-xpw6d\" is forbidden: unable to create new content in namespace gcp-volume-1918 because it is being terminated\nI0707 08:07:40.480542       1 event.go:291] \"Event occurred\" object=\"provisioning-9205/csi-hostpathc729h\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9205\\\" or manually created by system administrator\"\nE0707 08:07:40.558569       1 tokens_controller.go:261] error synchronizing serviceaccount pods-7224/default: secrets \"default-token-z88xg\" is forbidden: unable to create new content in namespace pods-7224 because it is being terminated\nI0707 08:07:40.747617       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0707 08:07:40.817041       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"DeploymentRollback\" message=\"Rolled back deployment \\\"webserver\\\" to revision 2\"\nI0707 08:07:40.850513       1 shared_informer.go:247] Caches are synced for garbage collector \nW0707 08:07:40.879654       1 shared_informer.go:494] resyncPeriod 44603399500812 is smaller than resyncCheckPeriod 56924284016793 and the informer has already started. 
Changing it to 56924284016793\nI0707 08:07:40.880059       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for e2e-test-crd-publish-openapi-3335-crds.crd-publish-openapi-test-unknown-at-root.example.com\nI0707 08:07:40.881403       1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI0707 08:07:40.881670       1 shared_informer.go:247] Caches are synced for resource quota \nI0707 08:07:40.985855       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-dd94f59b7 to 6\"\nI0707 08:07:41.298473       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-7746d44bfb to 3\"\nI0707 08:07:41.300185       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-j49g2\"\nI0707 08:07:41.317981       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-7746d44bfb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-7746d44bfb-dtfmt\"\nE0707 08:07:41.573541       1 tokens_controller.go:261] error synchronizing serviceaccount ingressclass-8087/default: secrets \"default-token-mngnv\" is forbidden: unable to create new content in namespace ingressclass-8087 because it is being terminated\nI0707 08:07:41.697567       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-7746d44bfb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-7746d44bfb-cmch7\"\nI0707 08:07:42.156587       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-7746d44bfb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-7746d44bfb-phpdv\"\nI0707 08:07:42.162664       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-7jmvq\"\nI0707 08:07:42.465755       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-qk5h6\"\nE0707 08:07:42.553762       1 tokens_controller.go:261] error synchronizing serviceaccount kubelet-test-4666/default: secrets \"default-token-klcc2\" is forbidden: unable to create new content in namespace kubelet-test-4666 because it is being terminated\nI0707 08:07:42.751589       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-7lm6g\"\nI0707 08:07:45.585453       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-7746d44bfb to 0\"\nI0707 08:07:45.638711       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" 
kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-6bfd98ccf6 to 3\"\nI0707 08:07:45.652529       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-6bfd98ccf6\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6bfd98ccf6-2kx22\"\nI0707 08:07:45.720103       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-6bfd98ccf6\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6bfd98ccf6-xjd24\"\nI0707 08:07:45.721621       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-6bfd98ccf6\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6bfd98ccf6-pjk2z\"\nI0707 08:07:45.741269       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-7746d44bfb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-7746d44bfb-phpdv\"\nI0707 08:07:45.786085       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-7746d44bfb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-7746d44bfb-6rx8g\"\nI0707 08:07:45.788617       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-7746d44bfb\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-7746d44bfb-cmch7\"\nI0707 08:07:47.329931       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-dd94f59b7 to 7\"\nI0707 08:07:47.373050       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-dd94f59b7-r7ctx\"\nI0707 08:07:47.958471       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-dd94f59b7 to 6\"\nI0707 08:07:48.073556       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-6bfd98ccf6 to 4\"\nI0707 08:07:48.123959       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-6bfd98ccf6\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6bfd98ccf6-2v8jw\"\nI0707 08:07:48.229183       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-r7ctx\"\nE0707 08:07:48.302573       1 tokens_controller.go:261] error synchronizing serviceaccount server-version-7843/default: secrets \"default-token-vsnn7\" is forbidden: unable to create new content in namespace server-version-7843 because it is being terminated\nI0707 08:07:48.766010       1 namespace_controller.go:185] Namespace has been deleted secrets-7042\nI0707 08:07:48.835975       1 
namespace_controller.go:185] Namespace has been deleted volumemode-7872\nI0707 08:07:48.927684       1 namespace_controller.go:185] Namespace has been deleted gcp-volume-1918\nI0707 08:07:48.985698       1 namespace_controller.go:185] Namespace has been deleted kubelet-test-4666\nI0707 08:07:49.018079       1 namespace_controller.go:185] Namespace has been deleted ingressclass-8087\nE0707 08:07:49.673724       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource\nI0707 08:07:50.612647       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-dd94f59b7 to 5\"\nE0707 08:07:50.699346       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0707 08:07:50.895495       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-qk5h6\"\nI0707 08:07:51.219773       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-dd94f59b7 to 2\"\nI0707 08:07:51.603675       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-595c898897 to 3\"\nI0707 08:07:51.877643       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-595c898897\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-595c898897-d4wlc\"\nI0707 08:07:52.101574       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-g4wpv\"\nI0707 08:07:52.101981       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-7jmvq\"\nI0707 08:07:52.102188       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-595c898897\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-595c898897-q6ccs\"\nI0707 08:07:52.102392       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-595c898897\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-595c898897-nf9pd\"\nI0707 08:07:52.124603       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-7lm6g\"\nI0707 08:07:52.278267       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set 
webserver-6bfd98ccf6 to 3\"\nE0707 08:07:52.445273       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0707 08:07:52.568084       1 replica_set.go:532] sync \"deployment-1755/webserver-dd94f59b7\" failed with Operation cannot be fulfilled on replicasets.apps \"webserver-dd94f59b7\": the object has been modified; please apply your changes to the latest version and try again\nI0707 08:07:52.634131       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-6bfd98ccf6\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-6bfd98ccf6-2v8jw\"\nI0707 08:07:53.100386       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-dd94f59b7 to 0\"\nI0707 08:07:53.190840       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-6bfd98ccf6 to 2\"\nI0707 08:07:53.440901       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-wfsf2\"\nI0707 08:07:53.449963       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-6db5448c8b to 3\"\nI0707 08:07:53.450844       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-dd94f59b7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-dd94f59b7-d8mwl\"\nI0707 08:07:53.499681       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-6db5448c8b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6db5448c8b-vxzgb\"\nI0707 08:07:53.567115       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-6db5448c8b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6db5448c8b-t54g8\"\nI0707 08:07:53.575565       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-6db5448c8b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6db5448c8b-jmz95\"\nI0707 08:07:53.635600       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-6bfd98ccf6\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-6bfd98ccf6-2kx22\"\nI0707 08:07:53.733740       1 event.go:291] \"Event occurred\" object=\"provisioning-9205/csi-hostpathc729h\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9205\\\" or manually created by system administrator\"\nE0707 08:07:55.088744       1 namespace_controller.go:162] deletion of namespace pods-7224 failed: unable to retrieve the complete list 
of server APIs: crd-publish-openapi-test-unknown-at-root.example.com/v1: the server could not find the requested resource\nE0707 08:07:55.709356       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0707 08:07:55.934954       1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-7365/test-quota\nI0707 08:07:57.216202       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-595c898897\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-595c898897-dxr5x\"\nI0707 08:07:57.360723       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-595c898897\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-595c898897-ng7bq\"\nI0707 08:07:57.665235       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-6bfd98ccf6\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6bfd98ccf6-g9dmv\"\nI0707 08:07:57.843658       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-6db5448c8b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6db5448c8b-vkwps\"\nI0707 08:07:58.109063       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-6db5448c8b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-6db5448c8b-5w9zw\"\nI0707 08:07:58.192680       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"DeploymentRollback\" message=\"Rolled back deployment \\\"webserver\\\" to revision 6\"\nI0707 08:07:59.389377       1 namespace_controller.go:185] Namespace has been deleted server-version-7843\nE0707 08:08:01.030799       1 tokens_controller.go:261] error synchronizing serviceaccount crd-publish-openapi-7308/default: secrets \"default-token-6nbk5\" is forbidden: unable to create new content in namespace crd-publish-openapi-7308 because it is being terminated\nI0707 08:08:01.666776       1 namespace_controller.go:185] Namespace has been deleted resourcequota-7365\nI0707 08:08:02.870359       1 namespace_controller.go:185] Namespace has been deleted pods-7224\nI0707 08:08:03.230139       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-595c898897 to 2\"\nI0707 08:08:03.390331       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-595c898897\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-595c898897-d4wlc\"\nI0707 08:08:03.502227       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-6bfd98ccf6 to 0\"\nI0707 08:08:03.640279       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-6bfd98ccf6\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted 
pod: webserver-6bfd98ccf6-xjd24\"\nI0707 08:08:03.641722       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-6bfd98ccf6\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-6bfd98ccf6-g9dmv\"\nI0707 08:08:03.641753       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-9bd49b56c to 2\"\nI0707 08:08:03.686093       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-9bd49b56c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-9bd49b56c-4wmqz\"\nI0707 08:08:03.765999       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-595c898897 to 1\"\nI0707 08:08:03.857867       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-9bd49b56c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-9bd49b56c-4gm4s\"\nE0707 08:08:03.858383       1 replica_set.go:532] sync \"deployment-1755/webserver-595c898897\" failed with Operation cannot be fulfilled on replicasets.apps \"webserver-595c898897\": the object has been modified; please apply your changes to the latest version and try again\nI0707 08:08:03.945693       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-9bd49b56c to 3\"\nI0707 08:08:04.049758       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-9bd49b56c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-9bd49b56c-f574t\"\nI0707 08:08:04.050168       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-595c898897\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-595c898897-dxr5x\"\nI0707 08:08:06.095681       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"DeploymentRollback\" message=\"Rolled back deployment \\\"webserver\\\" to revision 8\"\nI0707 08:08:06.255994       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-6db5448c8b to 1\"\nI0707 08:08:06.317891       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-595c898897 to 3\"\nI0707 08:08:06.348716       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-6db5448c8b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-6db5448c8b-vkwps\"\nI0707 08:08:06.382635       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-6db5448c8b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: 
webserver-6db5448c8b-5w9zw\"\nI0707 08:08:06.423725       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-595c898897\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-595c898897-9mg54\"\nI0707 08:08:06.443683       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-595c898897\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-595c898897-5h92l\"\nI0707 08:08:07.233841       1 namespace_controller.go:185] Namespace has been deleted crd-publish-openapi-7308\nI0707 08:08:07.496921       1 namespace_controller.go:185] Namespace has been deleted volume-8096\nE0707 08:08:07.504572       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0707 08:08:08.734378       1 event.go:291] \"Event occurred\" object=\"provisioning-9205/csi-hostpathc729h\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9205\\\" or manually created by system administrator\"\nI0707 08:08:08.864668       1 event.go:291] \"Event occurred\" object=\"cronjob-2080/failed-jobs-history-limit\" kind=\"CronJob\" apiVersion=\"batch/v1beta1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created job failed-jobs-history-limit-1594109280\"\nI0707 08:08:08.936910       1 event.go:291] \"Event occurred\" object=\"cronjob-2080/failed-jobs-history-limit-1594109280\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: failed-jobs-history-limit-1594109280-cjt5g\"\nI0707 08:08:08.953825       1 cronjob_controller.go:190] Unable to update status for cronjob-2080/failed-jobs-history-limit (rv = 828): Operation cannot be fulfilled on cronjobs.batch \"failed-jobs-history-limit\": the object has been modified; please apply your changes to the latest version and try again\nI0707 08:08:11.208418       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0707 08:08:11.209404       1 shared_informer.go:247] Caches are synced for garbage collector \nI0707 08:08:11.244894       1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI0707 08:08:11.245344       1 shared_informer.go:247] Caches are synced for resource quota \nI0707 08:08:12.086082       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-595c898897 to 2\"\nI0707 08:08:12.154967       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-9bd49b56c to 2\"\nI0707 08:08:12.178221       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-595c898897\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-595c898897-9mg54\"\nE0707 08:08:12.182554       1 pv_controller.go:1432] error finding provisioning plugin for claim volume-7122/pvc-8m2w6: storageclass.storage.k8s.io \"volume-7122\" not found\nI0707 08:08:12.183145       1 
event.go:291] \"Event occurred\" object=\"volume-7122/pvc-8m2w6\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-7122\\\" not found\"\nI0707 08:08:12.263340       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"DeploymentRollback\" message=\"Rolled back deployment \\\"webserver\\\" to revision 9\"\nI0707 08:08:12.268359       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-9bd49b56c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-9bd49b56c-4wmqz\"\nI0707 08:08:13.702857       1 event.go:291] \"Event occurred\" object=\"gc-4752/simpletest-rc-to-be-deleted\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest-rc-to-be-deleted-9x4nm\"\nI0707 08:08:13.755561       1 event.go:291] \"Event occurred\" object=\"gc-4752/simpletest-rc-to-be-deleted\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest-rc-to-be-deleted-wrr45\"\nI0707 08:08:13.764685       1 event.go:291] \"Event occurred\" object=\"gc-4752/simpletest-rc-to-be-deleted\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest-rc-to-be-deleted-pfl6r\"\nI0707 08:08:13.834677       1 event.go:291] \"Event occurred\" object=\"gc-4752/simpletest-rc-to-be-deleted\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest-rc-to-be-deleted-xt4vf\"\nI0707 08:08:13.835108       1 event.go:291] \"Event occurred\" object=\"gc-4752/simpletest-rc-to-be-deleted\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest-rc-to-be-deleted-7459s\"\nI0707 08:08:13.836731       1 event.go:291] \"Event occurred\" object=\"gc-4752/simpletest-rc-to-be-deleted\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest-rc-to-be-deleted-6682t\"\nI0707 08:08:13.837258       1 event.go:291] \"Event occurred\" object=\"gc-4752/simpletest-rc-to-be-deleted\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest-rc-to-be-deleted-9vlqq\"\nI0707 08:08:13.984963       1 event.go:291] \"Event occurred\" object=\"gc-4752/simpletest-rc-to-be-deleted\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest-rc-to-be-deleted-8nq5b\"\nI0707 08:08:13.985307       1 event.go:291] \"Event occurred\" object=\"gc-4752/simpletest-rc-to-be-deleted\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest-rc-to-be-deleted-xrkd2\"\nI0707 08:08:13.985923       1 event.go:291] \"Event occurred\" object=\"gc-4752/simpletest-rc-to-be-deleted\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest-rc-to-be-deleted-scvbx\"\nI0707 08:08:16.760634       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-595c898897\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" 
reason=\"SuccessfulCreate\" message=\"Created pod: webserver-595c898897-27djc\"\nI0707 08:08:17.634918       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-595c898897\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-595c898897-hfcnw\"\nE0707 08:08:17.932299       1 pv_controller.go:1432] error finding provisioning plugin for claim volumemode-8201/pvc-cf75v: storageclass.storage.k8s.io \"volumemode-8201\" not found\nI0707 08:08:17.932959       1 event.go:291] \"Event occurred\" object=\"volumemode-8201/pvc-cf75v\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volumemode-8201\\\" not found\"\nE0707 08:08:18.931765       1 tokens_controller.go:261] error synchronizing serviceaccount secrets-3050/default: secrets \"default-token-8hxjj\" is forbidden: unable to create new content in namespace secrets-3050 because it is being terminated\nE0707 08:08:20.718562       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0707 08:08:22.262944       1 tokens_controller.go:261] error synchronizing serviceaccount secrets-7484/default: secrets \"default-token-8xjsz\" is forbidden: unable to create new content in namespace secrets-7484 because it is being terminated\nI0707 08:08:23.737434       1 event.go:291] \"Event occurred\" object=\"provisioning-9205/csi-hostpathc729h\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9205\\\" or manually created by system administrator\"\nE0707 08:08:24.270681       1 tokens_controller.go:261] error synchronizing serviceaccount configmap-4886/default: secrets \"default-token-ljftw\" is forbidden: unable to create new content in namespace configmap-4886 because it is being terminated\nE0707 08:08:24.272216       1 tokens_controller.go:261] error synchronizing serviceaccount services-9164/default: serviceaccounts \"default\" not found\nI0707 08:08:24.512516       1 event.go:291] \"Event occurred\" object=\"job-4484/adopt-release\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: adopt-release-zgf4n\"\nE0707 08:08:24.522275       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"adopt-release.161f69f76c960204\", GenerateName:\"\", Namespace:\"job-4484\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Job\", Namespace:\"job-4484\", Name:\"adopt-release\", UID:\"74b44302-da10-4d0f-a7fa-b67c78a4f78e\", APIVersion:\"batch/v1\", ResourceVersion:\"1111\", FieldPath:\"\"}, Reason:\"SuccessfulCreate\", Message:\"Created pod: adopt-release-zgf4n\", Source:v1.EventSource{Component:\"job-controller\", Host:\"\"}, 
FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfb9293e1e855204, ext:172999145575, loc:(*time.Location)(0x6e802e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfb9293e1e855204, ext:172999145575, loc:(*time.Location)(0x6e802e0)}}, Count:1, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"adopt-release.161f69f76c960204\" is forbidden: unable to create new content in namespace job-4484 because it is being terminated' (will not retry!)\nE0707 08:08:25.007698       1 pv_controller.go:1432] error finding provisioning plugin for claim provisioning-7679/pvc-ml2gr: storageclass.storage.k8s.io \"provisioning-7679\" not found\nI0707 08:08:25.008072       1 event.go:291] \"Event occurred\" object=\"provisioning-7679/pvc-ml2gr\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-7679\\\" not found\"\nI0707 08:08:25.133923       1 namespace_controller.go:185] Namespace has been deleted limitrange-3985\nE0707 08:08:25.441523       1 tokens_controller.go:261] error synchronizing serviceaccount projected-1877/default: secrets \"default-token-6z6vr\" is forbidden: unable to create new content in namespace projected-1877 because it is being terminated\nE0707 08:08:26.189809       1 pv_controller.go:1432] error finding provisioning plugin for claim provisioning-7343/pvc-4c4pf: storageclass.storage.k8s.io \"provisioning-7343\" not found\nI0707 08:08:26.189968       1 event.go:291] \"Event occurred\" object=\"provisioning-7343/pvc-4c4pf\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-7343\\\" not found\"\nI0707 08:08:26.681082       1 namespace_controller.go:185] Namespace has been deleted secrets-3050\nI0707 08:08:27.110037       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-9874/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0707 08:08:27.166430       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-9874/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nI0707 08:08:27.543031       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4912/pvc-46nwj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-4912\\\" or manually created by system administrator\"\nI0707 08:08:28.917865       1 namespace_controller.go:185] Namespace has been deleted secrets-7484\nI0707 08:08:28.941905       1 event.go:291] \"Event occurred\" object=\"cronjob-2080/failed-jobs-history-limit-1594109280\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Warning\" reason=\"BackoffLimitExceeded\" message=\"Job has reached the specified backoff limit\"\nI0707 08:08:28.941944       1 event.go:291] \"Event occurred\" object=\"cronjob-2080/failed-jobs-history-limit-1594109280\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted 
pod: failed-jobs-history-limit-1594109280-cjt5g\"\nI0707 08:08:28.971272       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-9bd49b56c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-9bd49b56c-zd5rk\"\nE0707 08:08:28.973398       1 tokens_controller.go:261] error synchronizing serviceaccount downward-api-1742/default: secrets \"default-token-q4ssm\" is forbidden: unable to create new content in namespace downward-api-1742 because it is being terminated\nI0707 08:08:29.368244       1 event.go:291] \"Event occurred\" object=\"cronjob-2080/failed-jobs-history-limit\" kind=\"CronJob\" apiVersion=\"batch/v1beta1\" type=\"Normal\" reason=\"SawCompletedJob\" message=\"Saw completed job: failed-jobs-history-limit-1594109280, status: Failed\"\nI0707 08:08:30.807611       1 event.go:291] \"Event occurred\" object=\"services-6443/affinity-clusterip-timeout\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: affinity-clusterip-timeout-d45nq\"\nE0707 08:08:30.898881       1 tokens_controller.go:261] error synchronizing serviceaccount job-4484/default: secrets \"default-token-lsght\" is forbidden: unable to create new content in namespace job-4484 because it is being terminated\nI0707 08:08:30.913397       1 event.go:291] \"Event occurred\" object=\"services-6443/affinity-clusterip-timeout\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: affinity-clusterip-timeout-jhfpk\"\nI0707 08:08:30.913806       1 event.go:291] \"Event occurred\" object=\"services-6443/affinity-clusterip-timeout\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: affinity-clusterip-timeout-r78ln\"\nI0707 08:08:32.745765       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"DeploymentRollback\" message=\"Rolled back deployment \\\"webserver\\\" to revision 10\"\nI0707 08:08:33.308960       1 namespace_controller.go:185] Namespace has been deleted configmap-4886\nI0707 08:08:33.823446       1 namespace_controller.go:185] Namespace has been deleted services-9164\nI0707 08:08:34.278355       1 namespace_controller.go:185] Namespace has been deleted projected-1877\nI0707 08:08:34.651297       1 namespace_controller.go:185] Namespace has been deleted downward-api-1742\nW0707 08:08:37.817847       1 endpointslice_controller.go:284] Error syncing endpoint slices for service \"svc-latency-5404/latency-svc-4t2f9\", retrying. 
Error: Error updating latency-svc-4t2f9-8swks EndpointSlice for Service svc-latency-5404/latency-svc-4t2f9: endpointslices.discovery.k8s.io \"latency-svc-4t2f9-8swks\" not found\nI0707 08:08:37.818398       1 event.go:291] \"Event occurred\" object=\"svc-latency-5404/latency-svc-4t2f9\" kind=\"Service\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpointSlices\" message=\"Error updating Endpoint Slices for Service svc-latency-5404/latency-svc-4t2f9: Error updating latency-svc-4t2f9-8swks EndpointSlice for Service svc-latency-5404/latency-svc-4t2f9: endpointslices.discovery.k8s.io \\\"latency-svc-4t2f9-8swks\\\" not found\"\nE0707 08:08:38.053850       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"latency-svc-4t2f9.161f69fa85abf4a7\", GenerateName:\"\", Namespace:\"svc-latency-5404\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Service\", Namespace:\"svc-latency-5404\", Name:\"latency-svc-4t2f9\", UID:\"fd6c86b0-b67d-4494-8ca3-72f2dd171fd6\", APIVersion:\"v1\", ResourceVersion:\"2781\", FieldPath:\"\"}, Reason:\"FailedToUpdateEndpointSlices\", Message:\"Error updating Endpoint Slices for Service svc-latency-5404/latency-svc-4t2f9: Error updating latency-svc-4t2f9-8swks EndpointSlice for Service svc-latency-5404/latency-svc-4t2f9: endpointslices.discovery.k8s.io \\\"latency-svc-4t2f9-8swks\\\" not found\", Source:v1.EventSource{Component:\"endpoint-slice-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfb9294170bf02a7, ext:186304916238, loc:(*time.Location)(0x6e802e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfb9294170bf02a7, ext:186304916238, loc:(*time.Location)(0x6e802e0)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"latency-svc-4t2f9.161f69fa85abf4a7\" is forbidden: unable to create new content in namespace svc-latency-5404 because it is being terminated' (will not retry!)\nI0707 08:08:38.756168       1 event.go:291] \"Event occurred\" object=\"provisioning-9205/csi-hostpathc729h\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9205\\\" or manually created by system administrator\"\nI0707 08:08:39.417092       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4912/pvc-46nwj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-4912\\\" or manually created by system administrator\"\nW0707 08:08:40.442446       1 endpointslice_controller.go:284] Error syncing endpoint slices for service \"svc-latency-5404/latency-svc-94vqh\", retrying. 
Error: Error updating latency-svc-94vqh-f6s9f EndpointSlice for Service svc-latency-5404/latency-svc-94vqh: endpointslices.discovery.k8s.io \"latency-svc-94vqh-f6s9f\" not found\nI0707 08:08:40.442916       1 event.go:291] \"Event occurred\" object=\"svc-latency-5404/latency-svc-94vqh\" kind=\"Service\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpointSlices\" message=\"Error updating Endpoint Slices for Service svc-latency-5404/latency-svc-94vqh: Error updating latency-svc-94vqh-f6s9f EndpointSlice for Service svc-latency-5404/latency-svc-94vqh: endpointslices.discovery.k8s.io \\\"latency-svc-94vqh-f6s9f\\\" not found\"\nE0707 08:08:40.720156       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"latency-svc-94vqh.161f69fb221c3fe8\", GenerateName:\"\", Namespace:\"svc-latency-5404\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Service\", Namespace:\"svc-latency-5404\", Name:\"latency-svc-94vqh\", UID:\"0d0199a4-e592-4c92-a7b3-6624fdd2df1b\", APIVersion:\"v1\", ResourceVersion:\"2191\", FieldPath:\"\"}, Reason:\"FailedToUpdateEndpointSlices\", Message:\"Error updating Endpoint Slices for Service svc-latency-5404/latency-svc-94vqh: Error updating latency-svc-94vqh-f6s9f EndpointSlice for Service svc-latency-5404/latency-svc-94vqh: endpointslices.discovery.k8s.io \\\"latency-svc-94vqh-f6s9f\\\" not found\", Source:v1.EventSource{Component:\"endpoint-slice-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfb929421a5eefe8, ext:188929521225, loc:(*time.Location)(0x6e802e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfb929421a5eefe8, ext:188929521225, loc:(*time.Location)(0x6e802e0)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"latency-svc-94vqh.161f69fb221c3fe8\" is forbidden: unable to create new content in namespace svc-latency-5404 because it is being terminated' (will not retry!)\nW0707 08:08:41.656699       1 endpointslice_controller.go:284] Error syncing endpoint slices for service \"svc-latency-5404/latency-svc-bndtm\", retrying. 
Error: Error updating latency-svc-bndtm-wvzbw EndpointSlice for Service svc-latency-5404/latency-svc-bndtm: endpointslices.discovery.k8s.io \"latency-svc-bndtm-wvzbw\" not found\nI0707 08:08:41.656867       1 event.go:291] \"Event occurred\" object=\"svc-latency-5404/latency-svc-bndtm\" kind=\"Service\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpointSlices\" message=\"Error updating Endpoint Slices for Service svc-latency-5404/latency-svc-bndtm: Error updating latency-svc-bndtm-wvzbw EndpointSlice for Service svc-latency-5404/latency-svc-bndtm: endpointslices.discovery.k8s.io \\\"latency-svc-bndtm-wvzbw\\\" not found\"\nE0707 08:08:41.849850       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"latency-svc-bndtm.161f69fb6a7c396b\", GenerateName:\"\", Namespace:\"svc-latency-5404\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Service\", Namespace:\"svc-latency-5404\", Name:\"latency-svc-bndtm\", UID:\"0b61b397-fb1f-46d7-ac27-f6a855d3fb02\", APIVersion:\"v1\", ResourceVersion:\"2019\", FieldPath:\"\"}, Reason:\"FailedToUpdateEndpointSlices\", Message:\"Error updating Endpoint Slices for Service svc-latency-5404/latency-svc-bndtm: Error updating latency-svc-bndtm-wvzbw EndpointSlice for Service svc-latency-5404/latency-svc-bndtm: endpointslices.discovery.k8s.io \\\"latency-svc-bndtm-wvzbw\\\" not found\", Source:v1.EventSource{Component:\"endpoint-slice-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfb9294267241f6b, ext:190143770586, loc:(*time.Location)(0x6e802e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfb9294267241f6b, ext:190143770586, loc:(*time.Location)(0x6e802e0)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"latency-svc-bndtm.161f69fb6a7c396b\" is forbidden: unable to create new content in namespace svc-latency-5404 because it is being terminated' (will not retry!)\nW0707 08:08:42.181408       1 endpointslice_controller.go:284] Error syncing endpoint slices for service \"svc-latency-5404/latency-svc-csq5d\", retrying. 
Error: Error updating latency-svc-csq5d-nbngz EndpointSlice for Service svc-latency-5404/latency-svc-csq5d: endpointslices.discovery.k8s.io \"latency-svc-csq5d-nbngz\" not found\nI0707 08:08:42.182088       1 event.go:291] \"Event occurred\" object=\"svc-latency-5404/latency-svc-csq5d\" kind=\"Service\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpointSlices\" message=\"Error updating Endpoint Slices for Service svc-latency-5404/latency-svc-csq5d: Error updating latency-svc-csq5d-nbngz EndpointSlice for Service svc-latency-5404/latency-svc-csq5d: endpointslices.discovery.k8s.io \\\"latency-svc-csq5d-nbngz\\\" not found\"\nE0707 08:08:42.451175       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"latency-svc-csq5d.161f69fb89c29fae\", GenerateName:\"\", Namespace:\"svc-latency-5404\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Service\", Namespace:\"svc-latency-5404\", Name:\"latency-svc-csq5d\", UID:\"101e52cb-c746-4612-891f-44ee4e4b182d\", APIVersion:\"v1\", ResourceVersion:\"2701\", FieldPath:\"\"}, Reason:\"FailedToUpdateEndpointSlices\", Message:\"Error updating Endpoint Slices for Service svc-latency-5404/latency-svc-csq5d: Error updating latency-svc-csq5d-nbngz EndpointSlice for Service svc-latency-5404/latency-svc-csq5d: endpointslices.discovery.k8s.io \\\"latency-svc-csq5d-nbngz\\\" not found\", Source:v1.EventSource{Component:\"endpoint-slice-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfb929428acfbbae, ext:190668477964, loc:(*time.Location)(0x6e802e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfb929428acfbbae, ext:190668477964, loc:(*time.Location)(0x6e802e0)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"latency-svc-csq5d.161f69fb89c29fae\" is forbidden: unable to create new content in namespace svc-latency-5404 because it is being terminated' (will not retry!)\nW0707 08:08:43.505972       1 endpointslice_controller.go:284] Error syncing endpoint slices for service \"svc-latency-5404/latency-svc-fck7c\", retrying. 
Error: Error updating latency-svc-fck7c-v792v EndpointSlice for Service svc-latency-5404/latency-svc-fck7c: endpointslices.discovery.k8s.io \"latency-svc-fck7c-v792v\" not found\nI0707 08:08:43.506311       1 event.go:291] \"Event occurred\" object=\"svc-latency-5404/latency-svc-fck7c\" kind=\"Service\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpointSlices\" message=\"Error updating Endpoint Slices for Service svc-latency-5404/latency-svc-fck7c: Error updating latency-svc-fck7c-v792v EndpointSlice for Service svc-latency-5404/latency-svc-fck7c: endpointslices.discovery.k8s.io \\\"latency-svc-fck7c-v792v\\\" not found\"\nE0707 08:08:43.706571       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"latency-svc-fck7c.161f69fbd8b5e524\", GenerateName:\"\", Namespace:\"svc-latency-5404\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Service\", Namespace:\"svc-latency-5404\", Name:\"latency-svc-fck7c\", UID:\"0c5a8cab-40d9-4f3b-9ef2-c3a6f4ed27e5\", APIVersion:\"v1\", ResourceVersion:\"2374\", FieldPath:\"\"}, Reason:\"FailedToUpdateEndpointSlices\", Message:\"Error updating Endpoint Slices for Service svc-latency-5404/latency-svc-fck7c: Error updating latency-svc-fck7c-v792v EndpointSlice for Service svc-latency-5404/latency-svc-fck7c: endpointslices.discovery.k8s.io \\\"latency-svc-fck7c-v792v\\\" not found\", Source:v1.EventSource{Component:\"endpoint-slice-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfb92942de283724, ext:191993043850, loc:(*time.Location)(0x6e802e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfb92942de283724, ext:191993043850, loc:(*time.Location)(0x6e802e0)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"latency-svc-fck7c.161f69fbd8b5e524\" is forbidden: unable to create new content in namespace svc-latency-5404 because it is being terminated' (will not retry!)\nW0707 08:08:45.387181       1 endpointslice_controller.go:284] Error syncing endpoint slices for service \"svc-latency-5404/latency-svc-kxms8\", retrying. Error: Error updating latency-svc-kxms8-8h6fm EndpointSlice for Service svc-latency-5404/latency-svc-kxms8: endpointslices.discovery.k8s.io \"latency-svc-kxms8-8h6fm\" not found\nW0707 08:08:45.388816       1 endpointslice_controller.go:284] Error syncing endpoint slices for service \"svc-latency-5404/latency-svc-kgsdc\", retrying. 
Error: Error updating latency-svc-kgsdc-r8bbs EndpointSlice for Service svc-latency-5404/latency-svc-kgsdc: endpointslices.discovery.k8s.io \"latency-svc-kgsdc-r8bbs\" not found\nI0707 08:08:45.389079       1 event.go:291] \"Event occurred\" object=\"svc-latency-5404/latency-svc-kxms8\" kind=\"Service\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpointSlices\" message=\"Error updating Endpoint Slices for Service svc-latency-5404/latency-svc-kxms8: Error updating latency-svc-kxms8-8h6fm EndpointSlice for Service svc-latency-5404/latency-svc-kxms8: endpointslices.discovery.k8s.io \\\"latency-svc-kxms8-8h6fm\\\" not found\"\nI0707 08:08:45.389263       1 event.go:291] \"Event occurred\" object=\"svc-latency-5404/latency-svc-kgsdc\" kind=\"Service\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpointSlices\" message=\"Error updating Endpoint Slices for Service svc-latency-5404/latency-svc-kgsdc: Error updating latency-svc-kgsdc-r8bbs EndpointSlice for Service svc-latency-5404/latency-svc-kgsdc: endpointslices.discovery.k8s.io \\\"latency-svc-kgsdc-r8bbs\\\" not found\"\nE0707 08:08:45.611676       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"latency-svc-kxms8.161f69fc48d6d223\", GenerateName:\"\", Namespace:\"svc-latency-5404\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Service\", Namespace:\"svc-latency-5404\", Name:\"latency-svc-kxms8\", UID:\"e1c91c1e-7cec-4aa3-a5da-0869214404bf\", APIVersion:\"v1\", ResourceVersion:\"2171\", FieldPath:\"\"}, Reason:\"FailedToUpdateEndpointSlices\", Message:\"Error updating Endpoint Slices for Service svc-latency-5404/latency-svc-kxms8: Error updating latency-svc-kxms8-8h6fm EndpointSlice for Service svc-latency-5404/latency-svc-kxms8: endpointslices.discovery.k8s.io \\\"latency-svc-kxms8-8h6fm\\\" not found\", Source:v1.EventSource{Component:\"endpoint-slice-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfb9294357139023, ext:193874249856, loc:(*time.Location)(0x6e802e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfb9294357139023, ext:193874249856, loc:(*time.Location)(0x6e802e0)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"latency-svc-kxms8.161f69fc48d6d223\" is forbidden: unable to create new content in namespace svc-latency-5404 because it is being terminated' (will not retry!)\nE0707 08:08:45.883727       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"latency-svc-kgsdc.161f69fc48efdbfe\", GenerateName:\"\", Namespace:\"svc-latency-5404\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Service\", Namespace:\"svc-latency-5404\", Name:\"latency-svc-kgsdc\", UID:\"6a46d1d2-6ea5-4edb-b5cc-ccbae5d71645\", APIVersion:\"v1\", ResourceVersion:\"2806\", FieldPath:\"\"}, Reason:\"FailedToUpdateEndpointSlices\", Message:\"Error updating Endpoint Slices for Service svc-latency-5404/latency-svc-kgsdc: Error updating latency-svc-kgsdc-r8bbs EndpointSlice for Service svc-latency-5404/latency-svc-kgsdc: endpointslices.discovery.k8s.io \\\"latency-svc-kgsdc-r8bbs\\\" not found\", Source:v1.EventSource{Component:\"endpoint-slice-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfb92943572c99fe, ext:193875890793, loc:(*time.Location)(0x6e802e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfb92943572c99fe, ext:193875890793, loc:(*time.Location)(0x6e802e0)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"latency-svc-kgsdc.161f69fc48efdbfe\" is forbidden: unable to create new content in namespace svc-latency-5404 because it is being terminated' (will not retry!)\nW0707 08:08:46.021381       1 endpointslice_controller.go:284] Error syncing endpoint slices for service \"svc-latency-5404/latency-svc-l66sb\", retrying. Error: Error updating latency-svc-l66sb-kvjn7 EndpointSlice for Service svc-latency-5404/latency-svc-l66sb: endpointslices.discovery.k8s.io \"latency-svc-l66sb-kvjn7\" not found\nI0707 08:08:46.022041       1 event.go:291] \"Event occurred\" object=\"svc-latency-5404/latency-svc-l66sb\" kind=\"Service\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpointSlices\" message=\"Error updating Endpoint Slices for Service svc-latency-5404/latency-svc-l66sb: Error updating latency-svc-l66sb-kvjn7 EndpointSlice for Service svc-latency-5404/latency-svc-l66sb: endpointslices.discovery.k8s.io \\\"latency-svc-l66sb-kvjn7\\\" not found\"\nE0707 08:08:46.300732       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"latency-svc-l66sb.161f69fc6ea3ef61\", GenerateName:\"\", Namespace:\"svc-latency-5404\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Service\", Namespace:\"svc-latency-5404\", Name:\"latency-svc-l66sb\", UID:\"b1f7b1fd-d7e2-4d8a-8622-6315c0f72d88\", APIVersion:\"v1\", ResourceVersion:\"1689\", FieldPath:\"\"}, Reason:\"FailedToUpdateEndpointSlices\", Message:\"Error updating Endpoint Slices for Service svc-latency-5404/latency-svc-l66sb: Error updating latency-svc-l66sb-kvjn7 EndpointSlice for Service svc-latency-5404/latency-svc-l66sb: endpointslices.discovery.k8s.io \\\"latency-svc-l66sb-kvjn7\\\" not found\", Source:v1.EventSource{Component:\"endpoint-slice-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfb929438145e361, 
ext:194508449229, loc:(*time.Location)(0x6e802e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfb929438145e361, ext:194508449229, loc:(*time.Location)(0x6e802e0)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"latency-svc-l66sb.161f69fc6ea3ef61\" is forbidden: unable to create new content in namespace svc-latency-5404 because it is being terminated' (will not retry!)\nW0707 08:08:46.814409       1 endpointslice_controller.go:284] Error syncing endpoint slices for service \"svc-latency-5404/latency-svc-nnhq9\", retrying. Error: Error updating latency-svc-nnhq9-9qnxj EndpointSlice for Service svc-latency-5404/latency-svc-nnhq9: endpointslices.discovery.k8s.io \"latency-svc-nnhq9-9qnxj\" not found\nI0707 08:08:46.815313       1 event.go:291] \"Event occurred\" object=\"svc-latency-5404/latency-svc-nnhq9\" kind=\"Service\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpointSlices\" message=\"Error updating Endpoint Slices for Service svc-latency-5404/latency-svc-nnhq9: Error updating latency-svc-nnhq9-9qnxj EndpointSlice for Service svc-latency-5404/latency-svc-nnhq9: endpointslices.discovery.k8s.io \\\"latency-svc-nnhq9-9qnxj\\\" not found\"\nE0707 08:08:47.053847       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"latency-svc-nnhq9.161f69fc9de89a0a\", GenerateName:\"\", Namespace:\"svc-latency-5404\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Service\", Namespace:\"svc-latency-5404\", Name:\"latency-svc-nnhq9\", UID:\"71ba6e16-97f1-4957-81b3-7eb46ba31ce1\", APIVersion:\"v1\", ResourceVersion:\"2660\", FieldPath:\"\"}, Reason:\"FailedToUpdateEndpointSlices\", Message:\"Error updating Endpoint Slices for Service svc-latency-5404/latency-svc-nnhq9: Error updating latency-svc-nnhq9-9qnxj EndpointSlice for Service svc-latency-5404/latency-svc-nnhq9: endpointslices.discovery.k8s.io \\\"latency-svc-nnhq9-9qnxj\\\" not found\", Source:v1.EventSource{Component:\"endpoint-slice-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfb92943b08a8e0a, ext:195301478514, loc:(*time.Location)(0x6e802e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfb92943b08a8e0a, ext:195301478514, loc:(*time.Location)(0x6e802e0)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"latency-svc-nnhq9.161f69fc9de89a0a\" is forbidden: unable to create new content in namespace svc-latency-5404 because it is being terminated' (will not retry!)\nE0707 08:08:50.290864       1 pv_controller.go:1432] error finding provisioning plugin for claim provisioning-6213/pvc-5kxlb: storageclass.storage.k8s.io \"provisioning-6213\" not found\nI0707 08:08:50.291205       1 event.go:291] \"Event occurred\" 
object=\"provisioning-6213/pvc-5kxlb\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-6213\\\" not found\"\nW0707 08:08:51.602090       1 endpointslice_controller.go:284] Error syncing endpoint slices for service \"svc-latency-5404/latency-svc-zxrfk\", retrying. Error: Error updating latency-svc-zxrfk-62xf2 EndpointSlice for Service svc-latency-5404/latency-svc-zxrfk: endpointslices.discovery.k8s.io \"latency-svc-zxrfk-62xf2\" not found\nI0707 08:08:51.602610       1 event.go:291] \"Event occurred\" object=\"svc-latency-5404/latency-svc-zxrfk\" kind=\"Service\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpointSlices\" message=\"Error updating Endpoint Slices for Service svc-latency-5404/latency-svc-zxrfk: Error updating latency-svc-zxrfk-62xf2 EndpointSlice for Service svc-latency-5404/latency-svc-zxrfk: endpointslices.discovery.k8s.io \\\"latency-svc-zxrfk-62xf2\\\" not found\"\nE0707 08:08:51.902033       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"latency-svc-zxrfk.161f69fdbb46d3d5\", GenerateName:\"\", Namespace:\"svc-latency-5404\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Service\", Namespace:\"svc-latency-5404\", Name:\"latency-svc-zxrfk\", UID:\"96d36fa8-1f2a-4786-af0b-9fd313755eda\", APIVersion:\"v1\", ResourceVersion:\"2692\", FieldPath:\"\"}, Reason:\"FailedToUpdateEndpointSlices\", Message:\"Error updating Endpoint Slices for Service svc-latency-5404/latency-svc-zxrfk: Error updating latency-svc-zxrfk-62xf2 EndpointSlice for Service svc-latency-5404/latency-svc-zxrfk: endpointslices.discovery.k8s.io \\\"latency-svc-zxrfk-62xf2\\\" not found\", Source:v1.EventSource{Component:\"endpoint-slice-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbfb92944e3e2d5d5, ext:200089160261, loc:(*time.Location)(0x6e802e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfb92944e3e2d5d5, ext:200089160261, loc:(*time.Location)(0x6e802e0)}}, Count:1, Type:\"Warning\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"latency-svc-zxrfk.161f69fdbb46d3d5\" is forbidden: unable to create new content in namespace svc-latency-5404 because it is being terminated' (will not retry!)\nI0707 08:08:52.054247       1 namespace_controller.go:185] Namespace has been deleted projected-7189\nE0707 08:08:52.203193       1 pv_controller.go:1432] error finding provisioning plugin for claim provisioning-6240/pvc-4qmn5: storageclass.storage.k8s.io \"provisioning-6240\" not found\nI0707 08:08:52.203591       1 event.go:291] \"Event occurred\" object=\"provisioning-6240/pvc-4qmn5\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-6240\\\" not found\"\nI0707 08:08:52.592905       1 event.go:291] \"Event 
occurred\" object=\"volume-expand-969/csi-hostpath-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\"\nI0707 08:08:52.982728       1 event.go:291] \"Event occurred\" object=\"volume-expand-969/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0707 08:08:53.181564       1 event.go:291] \"Event occurred\" object=\"volume-expand-969/csi-hostpath-provisioner\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\"\nI0707 08:08:54.072495       1 event.go:291] \"Event occurred\" object=\"volume-expand-969/csi-hostpath-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\"\nI0707 08:08:54.170476       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4912/pvc-46nwj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-4912\\\" or manually created by system administrator\"\nI0707 08:08:54.529516       1 event.go:291] \"Event occurred\" object=\"provisioning-9205/csi-hostpathc729h\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9205\\\" or manually created by system administrator\"\nI0707 08:08:54.530243       1 event.go:291] \"Event occurred\" object=\"volume-expand-969/csi-hostpath-snapshotter\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful\"\nI0707 08:08:54.719209       1 event.go:291] \"Event occurred\" object=\"volume-expand-5129/csi-hostpathf48sv\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-5129\\\" or manually created by system administrator\"\nI0707 08:08:54.720023       1 event.go:291] \"Event occurred\" object=\"volume-expand-5129/csi-hostpathf48sv\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-5129\\\" or manually created by system administrator\"\nE0707 08:08:54.821476       1 tokens_controller.go:261] error synchronizing serviceaccount webhook-6303-markers/default: secrets \"default-token-p4gnx\" is forbidden: unable to create new content in namespace webhook-6303-markers because it is being terminated\nI0707 08:08:55.572020       1 namespace_controller.go:185] Namespace has been deleted projected-5358\nI0707 08:08:55.845217       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-4480\nE0707 08:08:56.123924       1 tokens_controller.go:261] error synchronizing serviceaccount webhook-6303/default: secrets \"default-token-hkgd2\" 
is forbidden: unable to create new content in namespace webhook-6303 because it is being terminated\nE0707 08:08:59.187281       1 pv_controller.go:1432] error finding provisioning plugin for claim volumemode-389/pvc-bcgvf: storageclass.storage.k8s.io \"volumemode-389\" not found\nI0707 08:08:59.187492       1 event.go:291] \"Event occurred\" object=\"volumemode-389/pvc-bcgvf\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volumemode-389\\\" not found\"\nE0707 08:09:00.008859       1 namespace_controller.go:162] deletion of namespace job-4484 failed: unexpected items still remain in namespace: job-4484 for gvr: /v1, Resource=pods\nI0707 08:09:00.224999       1 event.go:291] \"Event occurred\" object=\"cronjob-2080/failed-jobs-history-limit\" kind=\"CronJob\" apiVersion=\"batch/v1beta1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created job failed-jobs-history-limit-1594109340\"\nI0707 08:09:00.315279       1 event.go:291] \"Event occurred\" object=\"cronjob-2080/failed-jobs-history-limit-1594109340\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: failed-jobs-history-limit-1594109340-ww9sj\"\nI0707 08:09:00.351496       1 cronjob_controller.go:190] Unable to update status for cronjob-2080/failed-jobs-history-limit (rv = 3493): Operation cannot be fulfilled on cronjobs.batch \"failed-jobs-history-limit\": the object has been modified; please apply your changes to the latest version and try again\nI0707 08:09:01.706061       1 namespace_controller.go:185] Namespace has been deleted webhook-6303-markers\nE0707 08:09:02.278760       1 namespace_controller.go:162] deletion of namespace job-4484 failed: unexpected items still remain in namespace: job-4484 for gvr: /v1, Resource=pods\nI0707 08:09:02.368246       1 namespace_controller.go:185] Namespace has been deleted webhook-6303\nE0707 08:09:04.096000       1 namespace_controller.go:162] deletion of namespace job-4484 failed: unexpected items still remain in namespace: job-4484 for gvr: /v1, Resource=pods\nI0707 08:09:04.354739       1 namespace_controller.go:185] Namespace has been deleted watch-5484\nE0707 08:09:06.438418       1 namespace_controller.go:162] deletion of namespace job-4484 failed: unexpected items still remain in namespace: job-4484 for gvr: /v1, Resource=pods\nI0707 08:09:08.766037       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4912/pvc-46nwj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-4912\\\" or manually created by system administrator\"\nI0707 08:09:08.766595       1 event.go:291] \"Event occurred\" object=\"provisioning-9205/csi-hostpathc729h\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9205\\\" or manually created by system administrator\"\nI0707 08:09:08.768006       1 event.go:291] \"Event occurred\" object=\"volume-expand-5129/csi-hostpathf48sv\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-5129\\\" or manually created by system administrator\"\nE0707 08:09:09.750395 
      1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0707 08:09:10.186922       1 namespace_controller.go:162] deletion of namespace job-4484 failed: unexpected items still remain in namespace: job-4484 for gvr: /v1, Resource=pods\nE0707 08:09:12.800855       1 namespace_controller.go:162] deletion of namespace job-4484 failed: unexpected items still remain in namespace: job-4484 for gvr: /v1, Resource=pods\nE0707 08:09:13.025834       1 tokens_controller.go:261] error synchronizing serviceaccount downward-api-4797/default: secrets \"default-token-6w9k2\" is forbidden: unable to create new content in namespace downward-api-4797 because it is being terminated\nI0707 08:09:15.097391       1 event.go:291] \"Event occurred\" object=\"statefulset-4810/datadir-ss-1\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0707 08:09:15.117529       1 event.go:291] \"Event occurred\" object=\"statefulset-4810/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Claim datadir-ss-1 Pod ss-1 in StatefulSet ss success\"\nI0707 08:09:15.438135       1 event.go:291] \"Event occurred\" object=\"statefulset-4810/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-1 in StatefulSet ss successful\"\nE0707 08:09:15.459055       1 namespace_controller.go:162] deletion of namespace job-4484 failed: unexpected items still remain in namespace: job-4484 for gvr: /v1, Resource=pods\nI0707 08:09:15.613088       1 event.go:291] \"Event occurred\" object=\"statefulset-4810/datadir-ss-1\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"rancher.io/local-path\\\" or manually created by system administrator\"\nI0707 08:09:15.613175       1 event.go:291] \"Event occurred\" object=\"statefulset-4810/datadir-ss-1\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"rancher.io/local-path\\\" or manually created by system administrator\"\nI0707 08:09:15.654438       1 namespace_controller.go:185] Namespace has been deleted services-5665\nI0707 08:09:15.998283       1 event.go:291] \"Event occurred\" object=\"cronjob-2080/failed-jobs-history-limit-1594109340\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Warning\" reason=\"BackoffLimitExceeded\" message=\"Job has reached the specified backoff limit\"\nI0707 08:09:15.998342       1 event.go:291] \"Event occurred\" object=\"cronjob-2080/failed-jobs-history-limit-1594109340\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: failed-jobs-history-limit-1594109340-ww9sj\"\nE0707 08:09:16.900306       1 tokens_controller.go:261] error synchronizing serviceaccount runtimeclass-3991/default: secrets \"default-token-drvbh\" is forbidden: unable to create new content in namespace runtimeclass-3991 because it is being terminated\nE0707 08:09:18.587378       1 namespace_controller.go:162] deletion of namespace job-4484 failed: unexpected items still remain in namespace: job-4484 for gvr: /v1, 
Resource=pods\nI0707 08:09:20.582281       1 namespace_controller.go:185] Namespace has been deleted downward-api-4797\nI0707 08:09:20.593275       1 event.go:291] \"Event occurred\" object=\"cronjob-2080/failed-jobs-history-limit\" kind=\"CronJob\" apiVersion=\"batch/v1beta1\" type=\"Normal\" reason=\"SawCompletedJob\" message=\"Saw completed job: failed-jobs-history-limit-1594109340, status: Failed\"\nI0707 08:09:20.770936       1 event.go:291] \"Event occurred\" object=\"cronjob-2080/failed-jobs-history-limit\" kind=\"CronJob\" apiVersion=\"batch/v1beta1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted job failed-jobs-history-limit-1594109280\"\nI0707 08:09:23.766130       1 event.go:291] \"Event occurred\" object=\"statefulset-4810/datadir-ss-1\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"rancher.io/local-path\\\" or manually created by system administrator\"\nI0707 08:09:23.766191       1 event.go:291] \"Event occurred\" object=\"provisioning-9205/csi-hostpathc729h\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9205\\\" or manually created by system administrator\"\nI0707 08:09:23.766210       1 event.go:291] \"Event occurred\" object=\"volume-expand-5129/csi-hostpathf48sv\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-5129\\\" or manually created by system administrator\"\nI0707 08:09:23.766226       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4912/pvc-46nwj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-4912\\\" or manually created by system administrator\"\nI0707 08:09:24.858877       1 namespace_controller.go:185] Namespace has been deleted runtimeclass-3991\nI0707 08:09:26.310944       1 namespace_controller.go:185] Namespace has been deleted downward-api-9206\nI0707 08:09:26.620017       1 namespace_controller.go:185] Namespace has been deleted job-4484\nI0707 08:09:30.365696       1 event.go:291] \"Event occurred\" object=\"gc-8689/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-m26ck\"\nI0707 08:09:30.403388       1 event.go:291] \"Event occurred\" object=\"gc-8689/simpletest.rc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: simpletest.rc-m7dk6\"\nE0707 08:09:30.554473       1 tokens_controller.go:261] error synchronizing serviceaccount cronjob-2080/default: secrets \"default-token-rgcnd\" is forbidden: unable to create new content in namespace cronjob-2080 because it is being terminated\nE0707 08:09:31.231566       1 tokens_controller.go:261] error synchronizing serviceaccount lease-test-859/default: secrets \"default-token-4zh5v\" is forbidden: unable to create new content in namespace lease-test-859 because it is being terminated\nI0707 08:09:31.385324       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" 
type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-9bd49b56c to 1\"\nI0707 08:09:31.468878       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-595c898897 to 3\"\nI0707 08:09:31.750946       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-595c898897\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-595c898897-k5lln\"\nI0707 08:09:31.753348       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-9bd49b56c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-9bd49b56c-zd5rk\"\nI0707 08:09:34.040872       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-6db5448c8b to 0\"\nI0707 08:09:34.159372       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set webserver-595c898897 to 4\"\nI0707 08:09:34.169579       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-6db5448c8b\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-6db5448c8b-jmz95\"\nI0707 08:09:34.170951       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-595c898897\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: webserver-595c898897-kr5bt\"\nE0707 08:09:36.206290       1 tokens_controller.go:261] error synchronizing serviceaccount provisioning-6240/default: secrets \"default-token-5crl9\" is forbidden: unable to create new content in namespace provisioning-6240 because it is being terminated\nI0707 08:09:38.337657       1 namespace_controller.go:185] Namespace has been deleted provisioning-7343\nI0707 08:09:38.774646       1 event.go:291] \"Event occurred\" object=\"provisioning-9205/csi-hostpathc729h\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9205\\\" or manually created by system administrator\"\nI0707 08:09:38.775654       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4912/pvc-46nwj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-4912\\\" or manually created by system administrator\"\nI0707 08:09:38.777778       1 event.go:291] \"Event occurred\" object=\"volume-expand-5129/csi-hostpathf48sv\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-5129\\\" or manually created by system administrator\"\nI0707 08:09:39.009026       1 namespace_controller.go:185] Namespace has been deleted lease-test-859\nI0707 08:09:39.510301       1 namespace_controller.go:185] Namespace has been deleted provisioning-6751\nE0707 
08:09:39.617605       1 tokens_controller.go:261] error synchronizing serviceaccount gc-4752/default: secrets \"default-token-z28l2\" is forbidden: unable to create new content in namespace gc-4752 because it is being terminated\nI0707 08:09:42.618011       1 namespace_controller.go:185] Namespace has been deleted projected-3006\nE0707 08:09:42.720571       1 tokens_controller.go:261] error synchronizing serviceaccount dns-5278/default: secrets \"default-token-l96wh\" is forbidden: unable to create new content in namespace dns-5278 because it is being terminated\nI0707 08:09:43.708261       1 namespace_controller.go:185] Namespace has been deleted nettest-6163\nI0707 08:09:43.724612       1 namespace_controller.go:185] Namespace has been deleted provisioning-6240\nI0707 08:09:43.755977       1 namespace_controller.go:185] Namespace has been deleted services-3766\nI0707 08:09:46.019714       1 namespace_controller.go:185] Namespace has been deleted gc-4752\nI0707 08:09:46.448650       1 namespace_controller.go:185] Namespace has been deleted svc-latency-5404\nI0707 08:09:46.818431       1 namespace_controller.go:185] Namespace has been deleted cronjob-2080\nE0707 08:09:47.402923       1 namespace_controller.go:162] deletion of namespace pods-8238 failed: unexpected items still remain in namespace: pods-8238 for gvr: /v1, Resource=pods\nE0707 08:09:48.494869       1 namespace_controller.go:162] deletion of namespace pods-8238 failed: unexpected items still remain in namespace: pods-8238 for gvr: /v1, Resource=pods\nI0707 08:09:48.502572       1 namespace_controller.go:185] Namespace has been deleted dns-5278\nE0707 08:09:48.938015       1 tokens_controller.go:261] error synchronizing serviceaccount emptydir-1281/default: secrets \"default-token-s4hwq\" is forbidden: unable to create new content in namespace emptydir-1281 because it is being terminated\nE0707 08:09:49.955090       1 pv_controller.go:1432] error finding provisioning plugin for claim provisioning-9080/pvc-lzvj4: storageclass.storage.k8s.io \"provisioning-9080\" not found\nI0707 08:09:49.955509       1 event.go:291] \"Event occurred\" object=\"provisioning-9080/pvc-lzvj4\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-9080\\\" not found\"\nE0707 08:09:50.228741       1 namespace_controller.go:162] deletion of namespace pods-8238 failed: unexpected items still remain in namespace: pods-8238 for gvr: /v1, Resource=pods\nE0707 08:09:50.935424       1 namespace_controller.go:162] deletion of namespace pods-8238 failed: unexpected items still remain in namespace: pods-8238 for gvr: /v1, Resource=pods\nE0707 08:09:52.834319       1 namespace_controller.go:162] deletion of namespace pods-8238 failed: unexpected items still remain in namespace: pods-8238 for gvr: /v1, Resource=pods\nE0707 08:09:53.271326       1 tokens_controller.go:261] error synchronizing serviceaccount services-1911/default: secrets \"default-token-7kcxl\" is forbidden: unable to create new content in namespace services-1911 because it is being terminated\nE0707 08:09:53.378790       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0707 08:09:54.120448       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4912/pvc-46nwj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" 
reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-4912\\\" or manually created by system administrator\"\nI0707 08:09:54.120488       1 event.go:291] \"Event occurred\" object=\"volume-expand-5129/csi-hostpathf48sv\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-5129\\\" or manually created by system administrator\"\nI0707 08:09:54.120506       1 event.go:291] \"Event occurred\" object=\"provisioning-9205/csi-hostpathc729h\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9205\\\" or manually created by system administrator\"\nI0707 08:09:54.475729       1 namespace_controller.go:185] Namespace has been deleted emptydir-8346\nE0707 08:09:54.484066       1 namespace_controller.go:162] deletion of namespace pods-8238 failed: unexpected items still remain in namespace: pods-8238 for gvr: /v1, Resource=pods\nI0707 08:09:54.945652       1 namespace_controller.go:185] Namespace has been deleted volume-5852\nI0707 08:09:55.083404       1 namespace_controller.go:185] Namespace has been deleted emptydir-1281\nE0707 08:09:56.643143       1 namespace_controller.go:162] deletion of namespace pods-8238 failed: unexpected items still remain in namespace: pods-8238 for gvr: /v1, Resource=pods\nE0707 08:09:57.155001       1 tokens_controller.go:261] error synchronizing serviceaccount services-6443/default: secrets \"default-token-r95t7\" is forbidden: unable to create new content in namespace services-6443 because it is being terminated\nE0707 08:09:57.334607       1 pv_controller.go:1432] error finding provisioning plugin for claim volume-4067/pvc-mg94b: storageclass.storage.k8s.io \"volume-4067\" not found\nI0707 08:09:57.334983       1 event.go:291] \"Event occurred\" object=\"volume-4067/pvc-mg94b\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-4067\\\" not found\"\nE0707 08:09:58.617447       1 namespace_controller.go:162] deletion of namespace pods-8238 failed: unexpected items still remain in namespace: pods-8238 for gvr: /v1, Resource=pods\nI0707 08:10:00.252560       1 namespace_controller.go:185] Namespace has been deleted services-1911\nI0707 08:10:02.124999       1 event.go:291] \"Event occurred\" object=\"replication-controller-1029/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: condition-test-rgdc9\"\nI0707 08:10:02.180021       1 event.go:291] \"Event occurred\" object=\"replication-controller-1029/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: condition-test-llqnw\"\nI0707 08:10:02.180069       1 event.go:291] \"Event occurred\" object=\"replication-controller-1029/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-wgpp5\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0707 08:10:02.219519       1 replica_set.go:532] sync \"replication-controller-1029/condition-test\" 
failed with pods \"condition-test-wgpp5\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0707 08:10:02.222115       1 event.go:291] \"Event occurred\" object=\"replication-controller-1029/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-48bqf\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0707 08:10:02.254651       1 replica_set.go:532] sync \"replication-controller-1029/condition-test\" failed with pods \"condition-test-48bqf\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0707 08:10:02.261647       1 event.go:291] \"Event occurred\" object=\"replication-controller-1029/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-z954p\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0707 08:10:02.295114       1 replica_set.go:532] sync \"replication-controller-1029/condition-test\" failed with pods \"condition-test-z954p\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nE0707 08:10:02.303789       1 replica_set.go:532] sync \"replication-controller-1029/condition-test\" failed with pods \"condition-test-v5nhm\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0707 08:10:02.304308       1 event.go:291] \"Event occurred\" object=\"replication-controller-1029/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-v5nhm\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0707 08:10:02.325650       1 replica_set.go:532] sync \"replication-controller-1029/condition-test\" failed with pods \"condition-test-497lx\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0707 08:10:02.326179       1 event.go:291] \"Event occurred\" object=\"replication-controller-1029/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-497lx\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0707 08:10:02.419696       1 replica_set.go:532] sync \"replication-controller-1029/condition-test\" failed with pods \"condition-test-jdm52\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0707 08:10:02.421223       1 event.go:291] \"Event occurred\" object=\"replication-controller-1029/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-jdm52\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0707 08:10:02.588615       1 replica_set.go:532] sync \"replication-controller-1029/condition-test\" failed with pods \"condition-test-bbvxz\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0707 08:10:02.589764       1 event.go:291] \"Event occurred\" object=\"replication-controller-1029/condition-test\" kind=\"ReplicationController\" 
apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-bbvxz\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nE0707 08:10:02.937893       1 replica_set.go:532] sync \"replication-controller-1029/condition-test\" failed with pods \"condition-test-zdhdr\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\nI0707 08:10:02.939038       1 event.go:291] \"Event occurred\" object=\"replication-controller-1029/condition-test\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"Error creating: pods \\\"condition-test-zdhdr\\\" is forbidden: exceeded quota: condition-test, requested: pods=1, used: pods=2, limited: pods=2\"\nI0707 08:10:03.279732       1 namespace_controller.go:185] Namespace has been deleted downward-api-4354\nI0707 08:10:03.283972       1 namespace_controller.go:185] Namespace has been deleted services-6443\nI0707 08:10:03.822882       1 namespace_controller.go:185] Namespace has been deleted metrics-grabber-2500\nI0707 08:10:05.473102       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-2915\nI0707 08:10:05.779565       1 namespace_controller.go:185] Namespace has been deleted pods-8238\nI0707 08:10:06.304996       1 namespace_controller.go:185] Namespace has been deleted volumemode-389\nI0707 08:10:08.778560       1 event.go:291] \"Event occurred\" object=\"volume-expand-5129/csi-hostpathf48sv\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-5129\\\" or manually created by system administrator\"\nI0707 08:10:08.778703       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4912/pvc-46nwj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-4912\\\" or manually created by system administrator\"\nI0707 08:10:08.902910       1 event.go:291] \"Event occurred\" object=\"provisioning-9205/csi-hostpathc729h\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9205\\\" or manually created by system administrator\"\nI0707 08:10:09.206425       1 resource_quota_controller.go:306] Resource quota has been deleted replication-controller-1029/condition-test\nI0707 08:10:12.578125       1 namespace_controller.go:185] Namespace has been deleted downward-api-6961\nI0707 08:10:14.425614       1 namespace_controller.go:185] Namespace has been deleted replication-controller-1029\nE0707 08:10:15.492526       1 tokens_controller.go:261] error synchronizing serviceaccount container-lifecycle-hook-2239/default: secrets \"default-token-c8k2w\" is forbidden: unable to create new content in namespace container-lifecycle-hook-2239 because it is being terminated\nI0707 08:10:23.783051       1 event.go:291] \"Event occurred\" object=\"provisioning-9205/csi-hostpathc729h\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9205\\\" or manually created by 
system administrator\"\nI0707 08:10:23.784587       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4912/pvc-46nwj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-4912\\\" or manually created by system administrator\"\nI0707 08:10:23.785735       1 event.go:291] \"Event occurred\" object=\"volume-expand-5129/csi-hostpathf48sv\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-5129\\\" or manually created by system administrator\"\nE0707 08:10:26.221394       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0707 08:10:27.227016       1 tokens_controller.go:261] error synchronizing serviceaccount services-1274/default: secrets \"default-token-sjq6m\" is forbidden: unable to create new content in namespace services-1274 because it is being terminated\nE0707 08:10:28.006492       1 tokens_controller.go:261] error synchronizing serviceaccount persistent-local-volumes-test-4044/default: secrets \"default-token-d2nsz\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-4044 because it is being terminated\nI0707 08:10:31.452908       1 event.go:291] \"Event occurred\" object=\"crd-webhook-8660/sample-crd-conversion-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-crd-conversion-webhook-deployment-7478868bd9 to 1\"\nI0707 08:10:31.520008       1 event.go:291] \"Event occurred\" object=\"crd-webhook-8660/sample-crd-conversion-webhook-deployment-7478868bd9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-crd-conversion-webhook-deployment-7478868bd9-2bx2d\"\nI0707 08:10:32.924819       1 namespace_controller.go:185] Namespace has been deleted services-1274\nI0707 08:10:33.037489       1 namespace_controller.go:185] Namespace has been deleted emptydir-6904\nE0707 08:10:34.810160       1 tokens_controller.go:261] error synchronizing serviceaccount downward-api-1787/default: secrets \"default-token-xdngv\" is forbidden: unable to create new content in namespace downward-api-1787 because it is being terminated\nE0707 08:10:35.212981       1 tokens_controller.go:261] error synchronizing serviceaccount pv-9231/default: secrets \"default-token-gf6hg\" is forbidden: unable to create new content in namespace pv-9231 because it is being terminated\nE0707 08:10:36.649344       1 tokens_controller.go:261] error synchronizing serviceaccount provisioning-9080/default: secrets \"default-token-4zg9j\" is forbidden: unable to create new content in namespace provisioning-9080 because it is being terminated\nE0707 08:10:37.140762       1 pv_controller.go:1432] error finding provisioning plugin for claim persistent-local-volumes-test-8129/pvc-mq446: no volume plugin matched name: kubernetes.io/no-provisioner\nI0707 08:10:37.142456       1 event.go:291] \"Event occurred\" object=\"persistent-local-volumes-test-8129/pvc-mq446\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"no 
volume plugin matched name: kubernetes.io/no-provisioner\"\nI0707 08:10:38.783133       1 event.go:291] \"Event occurred\" object=\"volume-expand-5129/csi-hostpathf48sv\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-5129\\\" or manually created by system administrator\"\nI0707 08:10:38.783242       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4912/pvc-46nwj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-4912\\\" or manually created by system administrator\"\nI0707 08:10:38.783570       1 event.go:291] \"Event occurred\" object=\"provisioning-9205/csi-hostpathc729h\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9205\\\" or manually created by system administrator\"\nE0707 08:10:40.215900       1 pv_controller.go:1432] error finding provisioning plugin for claim persistent-local-volumes-test-8129/pvc-52lfl: no volume plugin matched name: kubernetes.io/no-provisioner\nI0707 08:10:40.216448       1 event.go:291] \"Event occurred\" object=\"persistent-local-volumes-test-8129/pvc-52lfl\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"no volume plugin matched name: kubernetes.io/no-provisioner\"\nI0707 08:10:40.977720       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-4044\nI0707 08:10:41.358498       1 namespace_controller.go:185] Namespace has been deleted pod-disks-1528\nI0707 08:10:41.413838       1 namespace_controller.go:185] Namespace has been deleted downward-api-1787\nI0707 08:10:41.759312       1 namespace_controller.go:185] Namespace has been deleted pv-9231\nE0707 08:10:41.852236       1 namespace_controller.go:162] deletion of namespace mount-propagation-4976 failed: unexpected items still remain in namespace: mount-propagation-4976 for gvr: /v1, Resource=pods\nI0707 08:10:42.419190       1 namespace_controller.go:185] Namespace has been deleted provisioning-9080\nE0707 08:10:42.811693       1 namespace_controller.go:162] deletion of namespace mount-propagation-4976 failed: unexpected items still remain in namespace: mount-propagation-4976 for gvr: /v1, Resource=pods\nE0707 08:10:44.794386       1 namespace_controller.go:162] deletion of namespace mount-propagation-4976 failed: [unable to retrieve the complete list of server APIs: stable.example.com/v2: the server could not find the requested resource, unexpected items still remain in namespace: mount-propagation-4976 for gvr: /v1, Resource=pods]\nI0707 08:10:44.839502       1 namespace_controller.go:185] Namespace has been deleted container-lifecycle-hook-2239\nE0707 08:10:46.174807       1 namespace_controller.go:162] deletion of namespace mount-propagation-4976 failed: unexpected items still remain in namespace: mount-propagation-4976 for gvr: /v1, Resource=pods\nE0707 08:10:46.325392       1 pv_controller.go:1432] error finding provisioning plugin for claim volume-6717/pvc-mpztv: storageclass.storage.k8s.io \"volume-6717\" not found\nI0707 08:10:46.325826       1 event.go:291] \"Event occurred\" object=\"volume-6717/pvc-mpztv\" 
kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-6717\\\" not found\"\nI0707 08:10:46.346921       1 event.go:291] \"Event occurred\" object=\"kubectl-8241/agnhost-primary\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: agnhost-primary-pvg72\"\nE0707 08:10:47.647381       1 namespace_controller.go:162] deletion of namespace mount-propagation-4976 failed: unexpected items still remain in namespace: mount-propagation-4976 for gvr: /v1, Resource=pods\nI0707 08:10:48.777188       1 event.go:291] \"Event occurred\" object=\"kubectl-8241/agnhost-primary\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: agnhost-primary-pwgzh\"\nE0707 08:10:49.460125       1 namespace_controller.go:162] deletion of namespace mount-propagation-4976 failed: unexpected items still remain in namespace: mount-propagation-4976 for gvr: /v1, Resource=pods\nI0707 08:10:50.324003       1 event.go:291] \"Event occurred\" object=\"statefulset-4869/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success\"\nI0707 08:10:50.324717       1 event.go:291] \"Event occurred\" object=\"statefulset-4869/datadir-ss-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0707 08:10:50.363219       1 event.go:291] \"Event occurred\" object=\"statefulset-4869/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nI0707 08:10:50.410936       1 event.go:291] \"Event occurred\" object=\"statefulset-4869/datadir-ss-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"rancher.io/local-path\\\" or manually created by system administrator\"\nI0707 08:10:50.886744       1 namespace_controller.go:185] Namespace has been deleted security-context-test-5250\nE0707 08:10:51.143277       1 tokens_controller.go:261] error synchronizing serviceaccount podtemplate-4887/default: secrets \"default-token-z7ntc\" is forbidden: unable to create new content in namespace podtemplate-4887 because it is being terminated\nE0707 08:10:51.445965       1 namespace_controller.go:162] deletion of namespace mount-propagation-4976 failed: unexpected items still remain in namespace: mount-propagation-4976 for gvr: /v1, Resource=pods\nE0707 08:10:52.775715       1 namespace_controller.go:162] deletion of namespace mount-propagation-4976 failed: unexpected items still remain in namespace: mount-propagation-4976 for gvr: /v1, Resource=pods\nI0707 08:10:53.972477       1 event.go:291] \"Event occurred\" object=\"volume-expand-5129/csi-hostpathf48sv\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-5129\\\" or manually created by system administrator\"\nI0707 08:10:53.972542       1 event.go:291] \"Event occurred\" object=\"statefulset-4869/datadir-ss-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" 
message=\"waiting for a volume to be created, either by external provisioner \\\"rancher.io/local-path\\\" or manually created by system administrator\"\nI0707 08:10:54.079643       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4912/pvc-46nwj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-4912\\\" or manually created by system administrator\"\nI0707 08:10:54.079716       1 event.go:291] \"Event occurred\" object=\"provisioning-9205/csi-hostpathc729h\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9205\\\" or manually created by system administrator\"\nE0707 08:10:55.389115       1 namespace_controller.go:162] deletion of namespace mount-propagation-4976 failed: unexpected items still remain in namespace: mount-propagation-4976 for gvr: /v1, Resource=pods\nI0707 08:10:56.091078       1 namespace_controller.go:185] Namespace has been deleted gc-8689\nI0707 08:10:56.315547       1 namespace_controller.go:185] Namespace has been deleted podtemplate-4887\nE0707 08:10:58.706770       1 namespace_controller.go:162] deletion of namespace mount-propagation-4976 failed: unexpected items still remain in namespace: mount-propagation-4976 for gvr: /v1, Resource=pods\nE0707 08:10:59.475078       1 pv_controller.go:1432] error finding provisioning plugin for claim volume-6277/pvc-g6dzl: storageclass.storage.k8s.io \"volume-6277\" not found\nI0707 08:10:59.475448       1 event.go:291] \"Event occurred\" object=\"volume-6277/pvc-g6dzl\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-6277\\\" not found\"\nI0707 08:11:02.621170       1 event.go:291] \"Event occurred\" object=\"volumemode-1445/nfshbt2d\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"example.com/nfs-volumemode-1445\\\" or manually created by system administrator\"\nI0707 08:11:03.744701       1 namespace_controller.go:185] Namespace has been deleted nettest-1893\nE0707 08:11:04.385754       1 tokens_controller.go:261] error synchronizing serviceaccount persistent-local-volumes-test-8129/default: secrets \"default-token-r9fbt\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-8129 because it is being terminated\nE0707 08:11:05.165960       1 tokens_controller.go:261] error synchronizing serviceaccount nettest-7943/default: secrets \"default-token-pvspr\" is forbidden: unable to create new content in namespace nettest-7943 because it is being terminated\nI0707 08:11:07.810562       1 namespace_controller.go:185] Namespace has been deleted mount-propagation-4976\nE0707 08:11:08.544723       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0707 08:11:08.993048       1 event.go:291] \"Event occurred\" object=\"volumemode-1445/nfshbt2d\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external 
provisioner \\\"example.com/nfs-volumemode-1445\\\" or manually created by system administrator\"\nI0707 08:11:08.993115       1 event.go:291] \"Event occurred\" object=\"volume-expand-5129/csi-hostpathf48sv\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-5129\\\" or manually created by system administrator\"\nI0707 08:11:08.993152       1 event.go:291] \"Event occurred\" object=\"provisioning-9205/csi-hostpathc729h\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9205\\\" or manually created by system administrator\"\nI0707 08:11:08.993183       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4912/pvc-46nwj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-4912\\\" or manually created by system administrator\"\nE0707 08:11:09.427185       1 pv_controller.go:1432] error finding provisioning plugin for claim volume-1279/pvc-twxcd: storageclass.storage.k8s.io \"volume-1279\" not found\nI0707 08:11:09.430149       1 event.go:291] \"Event occurred\" object=\"volume-1279/pvc-twxcd\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-1279\\\" not found\"\nI0707 08:11:09.952814       1 event.go:291] \"Event occurred\" object=\"statefulset-4869/datadir-ss-1\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0707 08:11:09.953451       1 event.go:291] \"Event occurred\" object=\"statefulset-4869/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Claim datadir-ss-1 Pod ss-1 in StatefulSet ss success\"\nI0707 08:11:09.999130       1 event.go:291] \"Event occurred\" object=\"statefulset-4869/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-1 in StatefulSet ss successful\"\nI0707 08:11:10.236912       1 event.go:291] \"Event occurred\" object=\"statefulset-4869/datadir-ss-1\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"rancher.io/local-path\\\" or manually created by system administrator\"\nI0707 08:11:10.238775       1 event.go:291] \"Event occurred\" object=\"statefulset-4869/datadir-ss-1\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"rancher.io/local-path\\\" or manually created by system administrator\"\nI0707 08:11:11.125162       1 namespace_controller.go:185] Namespace has been deleted volumemode-2369\nI0707 08:11:13.550198       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for e2e-test-crd-webhook-3820-crds.stable.example.com\nI0707 08:11:13.550702       1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI0707 08:11:13.651121       1 shared_informer.go:247] Caches are 
synced for resource quota \nI0707 08:11:13.988474       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0707 08:11:13.989380       1 shared_informer.go:247] Caches are synced for garbage collector \nE0707 08:11:16.305771       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource\nI0707 08:11:17.344977       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-8129\nE0707 08:11:17.897364       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0707 08:11:19.775195       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0707 08:11:22.985291       1 namespace_controller.go:162] deletion of namespace kubectl-8241 failed: unexpected items still remain in namespace: kubectl-8241 for gvr: /v1, Resource=pods\nI0707 08:11:23.510018       1 namespace_controller.go:185] Namespace has been deleted secrets-5556\nI0707 08:11:23.805816       1 event.go:291] \"Event occurred\" object=\"volume-expand-5129/csi-hostpathf48sv\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-5129\\\" or manually created by system administrator\"\nI0707 08:11:23.805901       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4912/pvc-46nwj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-4912\\\" or manually created by system administrator\"\nI0707 08:11:23.806058       1 event.go:291] \"Event occurred\" object=\"provisioning-9205/csi-hostpathc729h\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9205\\\" or manually created by system administrator\"\nI0707 08:11:24.171400       1 namespace_controller.go:185] Namespace has been deleted volume-4067\nI0707 08:11:24.458593       1 namespace_controller.go:185] Namespace has been deleted certificates-2905\nE0707 08:11:24.458823       1 namespace_controller.go:162] deletion of namespace kubectl-8241 failed: unexpected items still remain in namespace: kubectl-8241 for gvr: /v1, Resource=pods\nE0707 08:11:25.096807       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0707 08:11:25.433996       1 namespace_controller.go:162] deletion of namespace kubectl-8241 failed: unexpected items still remain in namespace: kubectl-8241 for gvr: /v1, Resource=pods\nI0707 08:11:26.053438       1 event.go:291] \"Event occurred\" object=\"statefulset-4869/datadir-ss-2\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0707 08:11:26.083192       1 
event.go:291] \"Event occurred\" object=\"statefulset-4869/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Claim datadir-ss-2 Pod ss-2 in StatefulSet ss success\"\nI0707 08:11:26.141066       1 event.go:291] \"Event occurred\" object=\"statefulset-4869/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-2 in StatefulSet ss successful\"\nI0707 08:11:26.249111       1 event.go:291] \"Event occurred\" object=\"statefulset-4869/datadir-ss-2\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"rancher.io/local-path\\\" or manually created by system administrator\"\nI0707 08:11:26.249339       1 event.go:291] \"Event occurred\" object=\"statefulset-4869/datadir-ss-2\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"rancher.io/local-path\\\" or manually created by system administrator\"\nE0707 08:11:27.079005       1 namespace_controller.go:162] deletion of namespace kubectl-8241 failed: unexpected items still remain in namespace: kubectl-8241 for gvr: /v1, Resource=pods\nE0707 08:11:27.323107       1 tokens_controller.go:261] error synchronizing serviceaccount container-runtime-359/default: secrets \"default-token-vpspb\" is forbidden: unable to create new content in namespace container-runtime-359 because it is being terminated\nI0707 08:11:27.545836       1 namespace_controller.go:185] Namespace has been deleted crd-webhook-8660\nE0707 08:11:28.443940       1 namespace_controller.go:162] deletion of namespace kubectl-8241 failed: unexpected items still remain in namespace: kubectl-8241 for gvr: /v1, Resource=pods\nE0707 08:11:29.466914       1 namespace_controller.go:162] deletion of namespace kubectl-8241 failed: unexpected items still remain in namespace: kubectl-8241 for gvr: /v1, Resource=pods\nE0707 08:11:31.829779       1 namespace_controller.go:162] deletion of namespace kubectl-8241 failed: unexpected items still remain in namespace: kubectl-8241 for gvr: /v1, Resource=pods\nI0707 08:11:33.390982       1 namespace_controller.go:185] Namespace has been deleted container-runtime-359\nE0707 08:11:34.328852       1 namespace_controller.go:162] deletion of namespace kubectl-8241 failed: unexpected items still remain in namespace: kubectl-8241 for gvr: /v1, Resource=pods\nE0707 08:11:36.838102       1 namespace_controller.go:162] deletion of namespace kubectl-8241 failed: unexpected items still remain in namespace: kubectl-8241 for gvr: /v1, Resource=pods\nE0707 08:11:37.699153       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0707 08:11:38.805820       1 event.go:291] \"Event occurred\" object=\"volume-expand-5129/csi-hostpathf48sv\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-5129\\\" or manually created by system administrator\"\nI0707 08:11:38.805862       1 event.go:291] \"Event occurred\" object=\"provisioning-9205/csi-hostpathc729h\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" 
type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9205\\\" or manually created by system administrator\"\nI0707 08:11:38.805883       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4912/pvc-46nwj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-4912\\\" or manually created by system administrator\"\nE0707 08:11:39.065161       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0707 08:11:39.225912       1 tokens_controller.go:261] error synchronizing serviceaccount volumemode-8201/default: secrets \"default-token-h5wpc\" is forbidden: unable to create new content in namespace volumemode-8201 because it is being terminated\nI0707 08:11:39.342470       1 namespace_controller.go:185] Namespace has been deleted nettest-7943\nE0707 08:11:39.848540       1 tokens_controller.go:261] error synchronizing serviceaccount init-container-1309/default: secrets \"default-token-7b5gx\" is forbidden: unable to create new content in namespace init-container-1309 because it is being terminated\nI0707 08:11:40.481028       1 event.go:291] \"Event occurred\" object=\"disruption-5577/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-7h265\"\nI0707 08:11:40.537980       1 event.go:291] \"Event occurred\" object=\"disruption-5577/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-6ht2m\"\nI0707 08:11:40.548499       1 event.go:291] \"Event occurred\" object=\"disruption-5577/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-bmw6c\"\nI0707 08:11:40.601230       1 event.go:291] \"Event occurred\" object=\"disruption-5577/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-npq4r\"\nI0707 08:11:40.601317       1 event.go:291] \"Event occurred\" object=\"disruption-5577/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-tf9fs\"\nI0707 08:11:40.601381       1 event.go:291] \"Event occurred\" object=\"disruption-5577/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-74cfr\"\nI0707 08:11:40.664390       1 event.go:291] \"Event occurred\" object=\"disruption-5577/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-mj6ht\"\nI0707 08:11:41.243027       1 event.go:291] \"Event occurred\" object=\"disruption-5577/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-c9684\"\nI0707 08:11:41.243197       1 event.go:291] \"Event occurred\" object=\"disruption-5577/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-ls8cm\"\nI0707 08:11:41.243779       1 event.go:291] \"Event occurred\" object=\"disruption-5577/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" 
message=\"Created pod: rs-7f5zn\"\nE0707 08:11:43.002111       1 namespace_controller.go:162] deletion of namespace kubectl-8241 failed: unexpected items still remain in namespace: kubectl-8241 for gvr: /v1, Resource=pods\nI0707 08:11:44.032099       1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI0707 08:11:44.032146       1 shared_informer.go:247] Caches are synced for resource quota \nI0707 08:11:44.352515       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0707 08:11:44.352876       1 shared_informer.go:247] Caches are synced for garbage collector \nE0707 08:11:45.887602       1 tokens_controller.go:261] error synchronizing serviceaccount kubectl-2372/default: secrets \"default-token-vb6kr\" is forbidden: unable to create new content in namespace kubectl-2372 because it is being terminated\nI0707 08:11:46.190818       1 namespace_controller.go:185] Namespace has been deleted volumemode-8201\nE0707 08:11:46.733298       1 namespace_controller.go:162] deletion of namespace kubectl-8241 failed: unexpected items still remain in namespace: kubectl-8241 for gvr: /v1, Resource=pods\nI0707 08:11:47.785304       1 namespace_controller.go:185] Namespace has been deleted init-container-1309\nE0707 08:11:51.183258       1 tokens_controller.go:261] error synchronizing serviceaccount provisioning-7679/default: secrets \"default-token-wn8gf\" is forbidden: unable to create new content in namespace provisioning-7679 because it is being terminated\nI0707 08:11:51.367370       1 namespace_controller.go:185] Namespace has been deleted kubectl-2372\nE0707 08:11:51.424200       1 pv_controller.go:1432] error finding provisioning plugin for claim volumemode-27/pvc-xcpsg: storageclass.storage.k8s.io \"volumemode-27\" not found\nI0707 08:11:51.424843       1 event.go:291] \"Event occurred\" object=\"volumemode-27/pvc-xcpsg\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volumemode-27\\\" not found\"\nI0707 08:11:52.132274       1 event.go:291] \"Event occurred\" object=\"webhook-4934/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-webhook-deployment-bcc959585 to 1\"\nI0707 08:11:52.163824       1 event.go:291] \"Event occurred\" object=\"webhook-4934/sample-webhook-deployment-bcc959585\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-bcc959585-xwx8z\"\nE0707 08:11:52.986706       1 namespace_controller.go:162] deletion of namespace kubectl-8241 failed: unexpected items still remain in namespace: kubectl-8241 for gvr: /v1, Resource=pods\nI0707 08:11:53.814407       1 event.go:291] \"Event occurred\" object=\"volume-expand-5129/csi-hostpathf48sv\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-5129\\\" or manually created by system administrator\"\nI0707 08:11:53.930166       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4912/pvc-46nwj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-4912\\\" or manually created by system administrator\"\nI0707 
08:11:53.930214       1 event.go:291] \"Event occurred\" object=\"provisioning-9205/csi-hostpathc729h\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9205\\\" or manually created by system administrator\"\nI0707 08:11:56.997428       1 namespace_controller.go:185] Namespace has been deleted provisioning-7679\nI0707 08:11:57.279137       1 namespace_controller.go:185] Namespace has been deleted provisioning-1648\nE0707 08:11:57.955401       1 tokens_controller.go:261] error synchronizing serviceaccount kubectl-862/default: secrets \"default-token-lghg5\" is forbidden: unable to create new content in namespace kubectl-862 because it is being terminated\nE0707 08:11:59.978607       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0707 08:12:03.388370       1 namespace_controller.go:185] Namespace has been deleted kubectl-862\nI0707 08:12:03.533523       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled down replica set webserver-9bd49b56c to 0\"\nI0707 08:12:03.603620       1 event.go:291] \"Event occurred\" object=\"deployment-1755/webserver-9bd49b56c\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: webserver-9bd49b56c-f574t\"\nE0707 08:12:05.939524       1 namespace_controller.go:162] deletion of namespace kubectl-8241 failed: unexpected items still remain in namespace: kubectl-8241 for gvr: /v1, Resource=pods\nI0707 08:12:08.132330       1 event.go:291] \"Event occurred\" object=\"statefulset-4810/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Claim datadir-ss-2 Pod ss-2 in StatefulSet ss success\"\nI0707 08:12:08.133003       1 event.go:291] \"Event occurred\" object=\"statefulset-4810/datadir-ss-2\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0707 08:12:08.201024       1 event.go:291] \"Event occurred\" object=\"statefulset-4810/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-2 in StatefulSet ss successful\"\nI0707 08:12:08.247657       1 event.go:291] \"Event occurred\" object=\"statefulset-4810/datadir-ss-2\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"rancher.io/local-path\\\" or manually created by system administrator\"\nI0707 08:12:08.816382       1 event.go:291] \"Event occurred\" object=\"provisioning-9205/csi-hostpathc729h\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9205\\\" or manually created by system administrator\"\nI0707 08:12:08.816426       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4912/pvc-46nwj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" 
message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-4912\\\" or manually created by system administrator\"\nI0707 08:12:08.816444       1 event.go:291] \"Event occurred\" object=\"statefulset-4810/datadir-ss-2\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"rancher.io/local-path\\\" or manually created by system administrator\"\nI0707 08:12:08.816559       1 event.go:291] \"Event occurred\" object=\"volume-expand-5129/csi-hostpathf48sv\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-5129\\\" or manually created by system administrator\"\nI0707 08:12:08.908279       1 event.go:291] \"Event occurred\" object=\"aggregator-654/sample-apiserver-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-apiserver-deployment-69665d47f8 to 1\"\nI0707 08:12:09.026379       1 event.go:291] \"Event occurred\" object=\"aggregator-654/sample-apiserver-deployment-69665d47f8\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-apiserver-deployment-69665d47f8-7sdf4\"\nE0707 08:12:09.207344       1 tokens_controller.go:261] error synchronizing serviceaccount node-lease-test-278/default: secrets \"default-token-xvjg2\" is forbidden: unable to create new content in namespace node-lease-test-278 because it is being terminated\nE0707 08:12:10.482239       1 tokens_controller.go:261] error synchronizing serviceaccount webhook-4934-markers/default: secrets \"default-token-p75pq\" is forbidden: unable to create new content in namespace webhook-4934-markers because it is being terminated\nI0707 08:12:15.046147       1 event.go:291] \"Event occurred\" object=\"statefulset-4869/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-2 in StatefulSet ss successful\"\nI0707 08:12:15.565155       1 namespace_controller.go:185] Namespace has been deleted volume-6717\nI0707 08:12:15.711533       1 namespace_controller.go:185] Namespace has been deleted node-lease-test-278\nI0707 08:12:16.669610       1 namespace_controller.go:185] Namespace has been deleted webhook-4934-markers\nI0707 08:12:16.669941       1 namespace_controller.go:185] Namespace has been deleted volume-7122\nI0707 08:12:17.347383       1 namespace_controller.go:185] Namespace has been deleted webhook-4934\nE0707 08:12:17.706646       1 tokens_controller.go:261] error synchronizing serviceaccount volumemode-1445/default: secrets \"default-token-899pn\" is forbidden: unable to create new content in namespace volumemode-1445 because it is being terminated\nI0707 08:12:23.818417       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4912/pvc-46nwj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-4912\\\" or manually created by system administrator\"\nI0707 08:12:23.818752       1 event.go:291] \"Event occurred\" object=\"volume-expand-5129/csi-hostpathf48sv\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" 
reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-5129\\\" or manually created by system administrator\"\nI0707 08:12:23.818849       1 event.go:291] \"Event occurred\" object=\"provisioning-9205/csi-hostpathc729h\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9205\\\" or manually created by system administrator\"\nI0707 08:12:24.358179       1 namespace_controller.go:185] Namespace has been deleted volumemode-1445\nI0707 08:12:28.015557       1 namespace_controller.go:185] Namespace has been deleted var-expansion-9514\nI0707 08:12:28.825225       1 event.go:291] \"Event occurred\" object=\"statefulset-4869/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-2 in StatefulSet ss successful\"\nE0707 08:12:29.241822       1 namespace_controller.go:162] deletion of namespace kubectl-8241 failed: unexpected items still remain in namespace: kubectl-8241 for gvr: /v1, Resource=pods\nE0707 08:12:30.455647       1 namespace_controller.go:162] deletion of namespace kubelet-test-119 failed: unexpected items still remain in namespace: kubelet-test-119 for gvr: /v1, Resource=pods\nE0707 08:12:32.298447       1 namespace_controller.go:162] deletion of namespace kubelet-test-119 failed: unexpected items still remain in namespace: kubelet-test-119 for gvr: /v1, Resource=pods\nI0707 08:12:33.980149       1 namespace_controller.go:185] Namespace has been deleted container-probe-3849\nI0707 08:12:34.544390       1 namespace_controller.go:185] Namespace has been deleted deployment-1755\nI0707 08:12:34.575951       1 namespace_controller.go:185] Namespace has been deleted security-context-7873\nE0707 08:12:34.975395       1 namespace_controller.go:162] deletion of namespace kubelet-test-119 failed: unexpected items still remain in namespace: kubelet-test-119 for gvr: /v1, Resource=pods\nE0707 08:12:36.921099       1 namespace_controller.go:162] deletion of namespace kubelet-test-119 failed: unexpected items still remain in namespace: kubelet-test-119 for gvr: /v1, Resource=pods\nI0707 08:12:37.821204       1 namespace_controller.go:185] Namespace has been deleted volume-6277\nE0707 08:12:37.945985       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0707 08:12:38.828445       1 event.go:291] \"Event occurred\" object=\"provisioning-9205/csi-hostpathc729h\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9205\\\" or manually created by system administrator\"\nI0707 08:12:38.828995       1 event.go:291] \"Event occurred\" object=\"volume-expand-5129/csi-hostpathf48sv\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-5129\\\" or manually created by system administrator\"\nI0707 08:12:38.829239       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4912/pvc-46nwj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" 
type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-4912\\\" or manually created by system administrator\"\nE0707 08:12:38.991712       1 namespace_controller.go:162] deletion of namespace kubelet-test-119 failed: unexpected items still remain in namespace: kubelet-test-119 for gvr: /v1, Resource=pods\nE0707 08:12:41.059463       1 tokens_controller.go:261] error synchronizing serviceaccount port-forwarding-4650/default: secrets \"default-token-smrmq\" is forbidden: unable to create new content in namespace port-forwarding-4650 because it is being terminated\nE0707 08:12:41.311872       1 namespace_controller.go:162] deletion of namespace kubelet-test-119 failed: unexpected items still remain in namespace: kubelet-test-119 for gvr: /v1, Resource=pods\nI0707 08:12:41.465088       1 namespace_controller.go:185] Namespace has been deleted kubectl-7558\nE0707 08:12:41.674226       1 namespace_controller.go:162] deletion of namespace pods-3789 failed: unexpected items still remain in namespace: pods-3789 for gvr: /v1, Resource=pods\nI0707 08:12:41.995691       1 namespace_controller.go:185] Namespace has been deleted nettest-4514\nI0707 08:12:42.664728       1 event.go:291] \"Event occurred\" object=\"services-9982/externalsvc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: externalsvc-dl2qk\"\nI0707 08:12:42.942643       1 event.go:291] \"Event occurred\" object=\"services-9982/externalsvc\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: externalsvc-khk64\"\nE0707 08:12:43.948788       1 namespace_controller.go:162] deletion of namespace kubelet-test-119 failed: unexpected items still remain in namespace: kubelet-test-119 for gvr: /v1, Resource=pods\nE0707 08:12:44.348466       1 namespace_controller.go:162] deletion of namespace pods-3789 failed: unexpected items still remain in namespace: pods-3789 for gvr: /v1, Resource=pods\nI0707 08:12:44.352236       1 namespace_controller.go:185] Namespace has been deleted provisioning-2858\nI0707 08:12:45.193346       1 event.go:291] \"Event occurred\" object=\"services-7017/slow-terminating-unready-pod\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: slow-terminating-unready-pod-8gr9c\"\nI0707 08:12:45.555403       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0707 08:12:45.660804       1 shared_informer.go:247] Caches are synced for garbage collector \nE0707 08:12:46.215181       1 namespace_controller.go:162] deletion of namespace kubelet-test-119 failed: unexpected items still remain in namespace: kubelet-test-119 for gvr: /v1, Resource=pods\nE0707 08:12:46.253638       1 namespace_controller.go:162] deletion of namespace pods-3789 failed: unexpected items still remain in namespace: pods-3789 for gvr: /v1, Resource=pods\nE0707 08:12:47.360120       1 namespace_controller.go:162] deletion of namespace pods-3789 failed: unexpected items still remain in namespace: pods-3789 for gvr: /v1, Resource=pods\nE0707 08:12:48.998060       1 namespace_controller.go:162] deletion of namespace kubelet-test-119 failed: unexpected items still remain in namespace: kubelet-test-119 for gvr: /v1, Resource=pods\nE0707 08:12:49.018912       1 tokens_controller.go:261] error synchronizing serviceaccount provisioning-6213/default: 
secrets \"default-token-b4rv9\" is forbidden: unable to create new content in namespace provisioning-6213 because it is being terminated\nE0707 08:12:50.230213       1 namespace_controller.go:162] deletion of namespace pods-3789 failed: unexpected items still remain in namespace: pods-3789 for gvr: /v1, Resource=pods\nI0707 08:12:50.729613       1 namespace_controller.go:185] Namespace has been deleted container-probe-1757\nE0707 08:12:50.735856       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0707 08:12:52.860581       1 pv_controller.go:1432] error finding provisioning plugin for claim provisioning-6372/pvc-wmt7k: storageclass.storage.k8s.io \"provisioning-6372\" not found\nI0707 08:12:52.860895       1 event.go:291] \"Event occurred\" object=\"provisioning-6372/pvc-wmt7k\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"provisioning-6372\\\" not found\"\nE0707 08:12:53.373211       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource\nE0707 08:12:53.413879       1 namespace_controller.go:162] deletion of namespace pods-3789 failed: unexpected items still remain in namespace: pods-3789 for gvr: /v1, Resource=pods\nI0707 08:12:53.942042       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4912/pvc-46nwj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-4912\\\" or manually created by system administrator\"\nI0707 08:12:53.942075       1 event.go:291] \"Event occurred\" object=\"volume-expand-5129/csi-hostpathf48sv\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-5129\\\" or manually created by system administrator\"\nI0707 08:12:53.942090       1 event.go:291] \"Event occurred\" object=\"provisioning-9205/csi-hostpathc729h\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9205\\\" or manually created by system administrator\"\nE0707 08:12:54.109092       1 namespace_controller.go:162] deletion of namespace disruption-8356 failed: unexpected items still remain in namespace: disruption-8356 for gvr: /v1, Resource=pods\nE0707 08:12:54.453830       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0707 08:12:54.662875       1 namespace_controller.go:185] Namespace has been deleted var-expansion-5159\nE0707 08:12:56.135562       1 namespace_controller.go:162] deletion of namespace pods-3789 failed: unexpected items still remain in namespace: pods-3789 for gvr: /v1, Resource=pods\nE0707 08:12:56.147749       1 namespace_controller.go:162] deletion of namespace disruption-8356 failed: unexpected items still remain in namespace: disruption-8356 for gvr: /v1, Resource=pods\nE0707 
08:12:56.766202       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0707 08:12:58.054922       1 namespace_controller.go:162] deletion of namespace disruption-8356 failed: unexpected items still remain in namespace: disruption-8356 for gvr: /v1, Resource=pods\nE0707 08:12:58.333932       1 namespace_controller.go:162] deletion of namespace pods-3789 failed: unexpected items still remain in namespace: pods-3789 for gvr: /v1, Resource=pods\nI0707 08:12:58.528700       1 namespace_controller.go:185] Namespace has been deleted kubelet-test-119\nE0707 08:12:58.681513       1 tokens_controller.go:261] error synchronizing serviceaccount gc-5243/default: secrets \"default-token-rfm9l\" is forbidden: unable to create new content in namespace gc-5243 because it is being terminated\nI0707 08:12:58.938886       1 namespace_controller.go:185] Namespace has been deleted volume-1279\nE0707 08:12:59.450536       1 namespace_controller.go:162] deletion of namespace disruption-8356 failed: unexpected items still remain in namespace: disruption-8356 for gvr: /v1, Resource=pods\nI0707 08:12:59.511686       1 namespace_controller.go:185] Namespace has been deleted provisioning-6213\nI0707 08:12:59.985659       1 namespace_controller.go:185] Namespace has been deleted volumemode-27\nE0707 08:13:00.922095       1 namespace_controller.go:162] deletion of namespace disruption-8356 failed: unexpected items still remain in namespace: disruption-8356 for gvr: /v1, Resource=pods\nE0707 08:13:01.952433       1 namespace_controller.go:162] deletion of namespace disruption-8356 failed: unexpected items still remain in namespace: disruption-8356 for gvr: /v1, Resource=pods\nE0707 08:13:02.229028       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0707 08:13:03.468197       1 namespace_controller.go:162] deletion of namespace disruption-8356 failed: unexpected items still remain in namespace: disruption-8356 for gvr: /v1, Resource=pods\nI0707 08:13:04.152407       1 namespace_controller.go:185] Namespace has been deleted gc-5243\nE0707 08:13:04.692410       1 pv_controller.go:1432] error finding provisioning plugin for claim persistent-local-volumes-test-1070/pvc-4nrd2: no volume plugin matched name: kubernetes.io/no-provisioner\nI0707 08:13:04.692839       1 event.go:291] \"Event occurred\" object=\"persistent-local-volumes-test-1070/pvc-4nrd2\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"no volume plugin matched name: kubernetes.io/no-provisioner\"\nI0707 08:13:05.666093       1 namespace_controller.go:185] Namespace has been deleted pods-3789\nI0707 08:13:06.407117       1 event.go:291] \"Event occurred\" object=\"volume-3756/nfsj4bxh\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"example.com/nfs-volume-3756\\\" or manually created by system administrator\"\nI0707 08:13:06.408091       1 event.go:291] \"Event occurred\" object=\"volume-3756/nfsj4bxh\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, 
either by external provisioner \\\"example.com/nfs-volume-3756\\\" or manually created by system administrator\"\nE0707 08:13:06.828625       1 namespace_controller.go:162] deletion of namespace disruption-8356 failed: unexpected items still remain in namespace: disruption-8356 for gvr: /v1, Resource=pods\nI0707 08:13:09.037328       1 event.go:291] \"Event occurred\" object=\"provisioning-9205/csi-hostpathc729h\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9205\\\" or manually created by system administrator\"\nI0707 08:13:09.037645       1 event.go:291] \"Event occurred\" object=\"volume-3756/nfsj4bxh\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"example.com/nfs-volume-3756\\\" or manually created by system administrator\"\nI0707 08:13:09.037693       1 event.go:291] \"Event occurred\" object=\"volume-expand-5129/csi-hostpathf48sv\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-5129\\\" or manually created by system administrator\"\nI0707 08:13:09.037722       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4912/pvc-46nwj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-4912\\\" or manually created by system administrator\"\nE0707 08:13:10.079116       1 tokens_controller.go:261] error synchronizing serviceaccount provisioning-3191/csi-attacher: secrets \"csi-attacher-token-zzr5t\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:10.110779       1 tokens_controller.go:261] error synchronizing serviceaccount provisioning-3191/csi-provisioner: secrets \"csi-provisioner-token-8djsx\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nI0707 08:13:10.817278       1 event.go:291] \"Event occurred\" object=\"replication-controller-8053/my-hostname-basic-e13b9b32-247a-4ced-a689-9c485c0e574b\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: my-hostname-basic-e13b9b32-247a-4ced-a689-9c485c0e574b-zqsl7\"\nI0707 08:13:10.820463       1 namespace_controller.go:185] Namespace has been deleted init-container-2410\nI0707 08:13:10.856116       1 event.go:291] \"Event occurred\" object=\"provisioning-9205/csi-hostpathc729h\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9205\\\" or manually created by system administrator\"\nE0707 08:13:10.851790       1 tokens_controller.go:261] error synchronizing serviceaccount provisioning-3191/csi-snapshotter: secrets \"csi-snapshotter-token-n8zp9\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:10.896154       1 tokens_controller.go:261] error synchronizing serviceaccount provisioning-3191/csi-resizer: secrets \"csi-resizer-token-6gpxg\" is 
forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:11.066120       1 tokens_controller.go:261] error synchronizing serviceaccount provisioning-3191/default: secrets \"default-token-rc7ks\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:11.898924       1 namespace_controller.go:162] deletion of namespace disruption-8356 failed: unexpected items still remain in namespace: disruption-8356 for gvr: /v1, Resource=pods\nE0707 08:13:13.299933       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0707 08:13:14.443700       1 event.go:291] \"Event occurred\" object=\"volume-9115/csi-hostpath-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\"\nI0707 08:13:14.910827       1 event.go:291] \"Event occurred\" object=\"volume-9115/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0707 08:13:15.485093       1 event.go:291] \"Event occurred\" object=\"volume-9115/csi-hostpath-provisioner\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\"\nI0707 08:13:16.083429       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0707 08:13:16.083646       1 shared_informer.go:247] Caches are synced for garbage collector \nI0707 08:13:16.429345       1 event.go:291] \"Event occurred\" object=\"volume-9115/csi-hostpath-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\"\nE0707 08:13:16.441557       1 tokens_controller.go:261] error synchronizing serviceaccount persistent-local-volumes-test-1972/default: secrets \"default-token-lmp76\" is forbidden: unable to create new content in namespace persistent-local-volumes-test-1972 because it is being terminated\nI0707 08:13:16.855646       1 namespace_controller.go:185] Namespace has been deleted port-forwarding-4650\nI0707 08:13:16.861375       1 event.go:291] \"Event occurred\" object=\"volume-9115/csi-hostpath-snapshotter\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful\"\nI0707 08:13:17.245116       1 event.go:291] \"Event occurred\" object=\"volume-6332/csi-hostpathzg8ws\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-6332\\\" or manually created by system administrator\"\nE0707 08:13:17.345078       1 tokens_controller.go:261] error synchronizing serviceaccount discovery-2101/default: secrets \"default-token-qwvgt\" is forbidden: unable to create new content in namespace discovery-2101 because it is being terminated\nI0707 08:13:19.154989       1 event.go:291] \"Event occurred\" 
object=\"csi-mock-volumes-5717/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nE0707 08:13:19.212440       1 namespace_controller.go:162] deletion of namespace disruption-8356 failed: unexpected items still remain in namespace: disruption-8356 for gvr: /v1, Resource=pods\nI0707 08:13:19.227926       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-5717/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nI0707 08:13:19.356320       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2772/pvc-htdhs\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-2772\\\" or manually created by system administrator\"\nI0707 08:13:19.360872       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2772/pvc-htdhs\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-2772\\\" or manually created by system administrator\"\nE0707 08:13:20.545336       1 tokens_controller.go:261] error synchronizing serviceaccount kubectl-4573/default: secrets \"default-token-flclk\" is forbidden: unable to create new content in namespace kubectl-4573 because it is being terminated\nE0707 08:13:21.029116       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0707 08:13:22.361111       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-attacher, requeuing: controllerrevisions.apps \"csi-hostpath-attacher-576bf7b7bb\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.374077       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-attacher, requeuing: controllerrevisions.apps \"csi-hostpath-attacher-576bf7b7bb\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.421729       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-attacher, requeuing: controllerrevisions.apps \"csi-hostpath-attacher-576bf7b7bb\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.445058       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-provisioner, requeuing: controllerrevisions.apps \"csi-hostpath-provisioner-84c674d695\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.451440       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-attacher, requeuing: controllerrevisions.apps \"csi-hostpath-attacher-576bf7b7bb\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.460037       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-provisioner, 
requeuing: controllerrevisions.apps \"csi-hostpath-provisioner-84c674d695\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.489788       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-provisioner, requeuing: controllerrevisions.apps \"csi-hostpath-provisioner-84c674d695\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.500943       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-attacher, requeuing: controllerrevisions.apps \"csi-hostpath-attacher-576bf7b7bb\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.521751       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-provisioner, requeuing: controllerrevisions.apps \"csi-hostpath-provisioner-84c674d695\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.521789       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-resizer, requeuing: controllerrevisions.apps \"csi-hostpath-resizer-6fc9d7c89f\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.537616       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-resizer, requeuing: controllerrevisions.apps \"csi-hostpath-resizer-6fc9d7c89f\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nI0707 08:13:22.559779       1 namespace_controller.go:185] Namespace has been deleted kubectl-8241\nE0707 08:13:22.589222       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-resizer, requeuing: controllerrevisions.apps \"csi-hostpath-resizer-6fc9d7c89f\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.589916       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-provisioner, requeuing: controllerrevisions.apps \"csi-hostpath-provisioner-84c674d695\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.594078       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-attacher, requeuing: controllerrevisions.apps \"csi-hostpath-attacher-576bf7b7bb\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.617606       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-resizer, requeuing: controllerrevisions.apps \"csi-hostpath-resizer-6fc9d7c89f\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.643557       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-snapshotter, requeuing: controllerrevisions.apps \"csi-hostpath-snapshotter-6678bf4b9c\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.666151       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-snapshotter, requeuing: controllerrevisions.apps \"csi-hostpath-snapshotter-6678bf4b9c\" is forbidden: unable to create new content in namespace provisioning-3191 because it is 
being terminated\nE0707 08:13:22.685089       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-resizer, requeuing: controllerrevisions.apps \"csi-hostpath-resizer-6fc9d7c89f\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.703570       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-provisioner, requeuing: controllerrevisions.apps \"csi-hostpath-provisioner-84c674d695\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.703697       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-snapshotter, requeuing: controllerrevisions.apps \"csi-hostpath-snapshotter-6678bf4b9c\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.707958       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpathplugin, requeuing: controllerrevisions.apps \"csi-hostpathplugin-5ffdf4c986\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.720150       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpathplugin, requeuing: controllerrevisions.apps \"csi-hostpathplugin-5ffdf4c986\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.730017       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-snapshotter, requeuing: controllerrevisions.apps \"csi-hostpath-snapshotter-6678bf4b9c\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.735723       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpathplugin, requeuing: controllerrevisions.apps \"csi-hostpathplugin-5ffdf4c986\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.764017       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpathplugin, requeuing: controllerrevisions.apps \"csi-hostpathplugin-5ffdf4c986\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.770727       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-attacher, requeuing: controllerrevisions.apps \"csi-hostpath-attacher-576bf7b7bb\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.774231       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-resizer, requeuing: controllerrevisions.apps \"csi-hostpath-resizer-6fc9d7c89f\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.775003       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-snapshotter, requeuing: controllerrevisions.apps \"csi-hostpath-snapshotter-6678bf4b9c\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.809741       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpathplugin, requeuing: controllerrevisions.apps \"csi-hostpathplugin-5ffdf4c986\" is forbidden: unable to create new content in namespace provisioning-3191 because it is 
being terminated\nE0707 08:13:22.863386       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-snapshotter, requeuing: controllerrevisions.apps \"csi-hostpath-snapshotter-6678bf4b9c\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.868758       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-provisioner, requeuing: controllerrevisions.apps \"csi-hostpath-provisioner-84c674d695\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.896753       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpathplugin, requeuing: controllerrevisions.apps \"csi-hostpathplugin-5ffdf4c986\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:22.943844       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-resizer, requeuing: controllerrevisions.apps \"csi-hostpath-resizer-6fc9d7c89f\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:23.034522       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-snapshotter, requeuing: controllerrevisions.apps \"csi-hostpath-snapshotter-6678bf4b9c\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:23.068864       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpathplugin, requeuing: controllerrevisions.apps \"csi-hostpathplugin-5ffdf4c986\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:23.102153       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-attacher, requeuing: controllerrevisions.apps \"csi-hostpath-attacher-576bf7b7bb\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:23.198043       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-provisioner, requeuing: controllerrevisions.apps \"csi-hostpath-provisioner-84c674d695\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:23.270467       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-resizer, requeuing: controllerrevisions.apps \"csi-hostpath-resizer-6fc9d7c89f\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:23.370872       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-snapshotter, requeuing: controllerrevisions.apps \"csi-hostpath-snapshotter-6678bf4b9c\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:23.392786       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpathplugin, requeuing: controllerrevisions.apps \"csi-hostpathplugin-5ffdf4c986\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:23.751549       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-attacher, requeuing: controllerrevisions.apps \"csi-hostpath-attacher-576bf7b7bb\" is forbidden: unable to create new content in namespace 
provisioning-3191 because it is being terminated\nI0707 08:13:23.763433       1 request.go:645] Throttling request took 1.052193601s, request: GET:https://kind-control-plane:6443/apis/networking.k8s.io/v1beta1?timeout=32s\nI0707 08:13:23.857927       1 event.go:291] \"Event occurred\" object=\"volume-3756/nfsj4bxh\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"example.com/nfs-volume-3756\\\" or manually created by system administrator\"\nI0707 08:13:23.858362       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2772/pvc-htdhs\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-2772\\\" or manually created by system administrator\"\nI0707 08:13:23.858537       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4912/pvc-46nwj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-4912\\\" or manually created by system administrator\"\nI0707 08:13:23.858710       1 event.go:291] \"Event occurred\" object=\"volume-6332/csi-hostpathzg8ws\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-6332\\\" or manually created by system administrator\"\nE0707 08:13:23.866098       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-provisioner, requeuing: controllerrevisions.apps \"csi-hostpath-provisioner-84c674d695\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:23.913417       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-resizer, requeuing: controllerrevisions.apps \"csi-hostpath-resizer-6fc9d7c89f\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:24.016045       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-snapshotter, requeuing: controllerrevisions.apps \"csi-hostpath-snapshotter-6678bf4b9c\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:24.049157       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpathplugin, requeuing: controllerrevisions.apps \"csi-hostpathplugin-5ffdf4c986\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:25.035112       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-attacher, requeuing: controllerrevisions.apps \"csi-hostpath-attacher-576bf7b7bb\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:25.165282       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-provisioner, requeuing: controllerrevisions.apps \"csi-hostpath-provisioner-84c674d695\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:25.228677       1 stateful_set.go:392] error syncing StatefulSet 
provisioning-3191/csi-hostpath-resizer, requeuing: controllerrevisions.apps \"csi-hostpath-resizer-6fc9d7c89f\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nI0707 08:13:25.296003       1 namespace_controller.go:185] Namespace has been deleted discovery-2101\nE0707 08:13:25.300761       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpath-snapshotter, requeuing: controllerrevisions.apps \"csi-hostpath-snapshotter-6678bf4b9c\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nE0707 08:13:25.335782       1 stateful_set.go:392] error syncing StatefulSet provisioning-3191/csi-hostpathplugin, requeuing: controllerrevisions.apps \"csi-hostpathplugin-5ffdf4c986\" is forbidden: unable to create new content in namespace provisioning-3191 because it is being terminated\nI0707 08:13:25.796155       1 stateful_set.go:419] StatefulSet has been deleted provisioning-3191/csi-hostpath-attacher\nI0707 08:13:25.796830       1 stateful_set.go:419] StatefulSet has been deleted provisioning-3191/csi-hostpath-provisioner\nI0707 08:13:25.862097       1 stateful_set.go:419] StatefulSet has been deleted provisioning-3191/csi-hostpath-resizer\nI0707 08:13:25.893104       1 stateful_set.go:419] StatefulSet has been deleted provisioning-3191/csi-hostpath-snapshotter\nI0707 08:13:25.897296       1 namespace_controller.go:185] Namespace has been deleted kubectl-8363\nI0707 08:13:25.956962       1 stateful_set.go:419] StatefulSet has been deleted provisioning-3191/csi-hostpathplugin\nE0707 08:13:26.646359       1 replica_set.go:201] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{slow-terminating-unready-pod  services-7017 /api/v1/namespaces/services-7017/replicationcontrollers/slow-terminating-unready-pod 6e48a2e6-4be0-43f4-9118-6049c4bb95a9 10738 2 2020-07-07 08:12:45 +0000 UTC <nil> <nil> map[name:slow-terminating-unready-pod testid:tolerate-unready-f3f2041b-11da-499f-bcf3-4beef723cbfc] map[] [] []  [{e2e.test Update v1 2020-07-07 08:12:45 +0000 UTC FieldsV1 {\"f:metadata\":{\"f:labels\":{\".\":{},\"f:name\":{},\"f:testid\":{}}},\"f:spec\":{\"f:replicas\":{},\"f:selector\":{\".\":{},\"f:name\":{}},\"f:template\":{\".\":{},\"f:metadata\":{\".\":{},\"f:creationTimestamp\":{},\"f:labels\":{\".\":{},\"f:name\":{},\"f:testid\":{}}},\"f:spec\":{\".\":{},\"f:containers\":{\".\":{},\"k:{\\\"name\\\":\\\"slow-terminating-unready-pod\\\"}\":{\".\":{},\"f:args\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:lifecycle\":{\".\":{},\"f:preStop\":{\".\":{},\"f:exec\":{\".\":{},\"f:command\":{}}}},\"f:name\":{},\"f:ports\":{\".\":{},\"k:{\\\"containerPort\\\":80,\\\"protocol\\\":\\\"TCP\\\"}\":{\".\":{},\"f:containerPort\":{},\"f:protocol\":{}}},\"f:readinessProbe\":{\".\":{},\"f:exec\":{\".\":{},\"f:command\":{}},\"f:failureThreshold\":{},\"f:periodSeconds\":{},\"f:successThreshold\":{},\"f:timeoutSeconds\":{}},\"f:resources\":{},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{}}},\"f:dnsPolicy\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}}}} {kube-controller-manager Update v1 2020-07-07 08:12:45 +0000 UTC FieldsV1 {\"f:status\":{\"f:fullyLabeledReplicas\":{},\"f:observedGeneration\":{},\"f:replicas\":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: slow-terminating-unready-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 
0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:slow-terminating-unready-pod testid:tolerate-unready-f3f2041b-11da-499f-bcf3-4beef723cbfc] map[] [] []  []} {[] [] [{slow-terminating-unready-pod us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.20 [] [netexec --http-port=80]  [{ 0 80 TCP }] [] [] {map[] map[]} [] [] nil Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/false],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil &Lifecycle{PostStart:nil,PreStop:&Handler{Exec:&ExecAction{Command:[/bin/sleep 600],},HTTPGet:nil,TCPSocket:nil,},} /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00291fc68 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}\nI0707 08:13:26.767422       1 event.go:291] \"Event occurred\" object=\"services-7017/slow-terminating-unready-pod\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted pod: slow-terminating-unready-pod-8gr9c\"\nI0707 08:13:27.501205       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-1972\nI0707 08:13:27.604835       1 stateful_set.go:419] StatefulSet has been deleted provisioning-3191/csi-hostpath-attacher\nI0707 08:13:27.727020       1 stateful_set.go:419] StatefulSet has been deleted provisioning-3191/csi-hostpath-provisioner\nI0707 08:13:27.801698       1 stateful_set.go:419] StatefulSet has been deleted provisioning-3191/csi-hostpath-resizer\nI0707 08:13:27.862436       1 stateful_set.go:419] StatefulSet has been deleted provisioning-3191/csi-hostpath-snapshotter\nI0707 08:13:27.896879       1 stateful_set.go:419] StatefulSet has been deleted provisioning-3191/csi-hostpathplugin\nI0707 08:13:28.409903       1 namespace_controller.go:185] Namespace has been deleted clientset-206\nI0707 08:13:29.381178       1 namespace_controller.go:185] Namespace has been deleted disruption-8356\nI0707 08:13:29.757970       1 namespace_controller.go:185] Namespace has been deleted provisioning-9205\nI0707 08:13:31.222043       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-133\nE0707 08:13:31.811694       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0707 08:13:34.850226       1 tokens_controller.go:261] error synchronizing serviceaccount kubectl-9556/default: secrets \"default-token-sss6m\" is forbidden: unable to create new content in namespace kubectl-9556 because it is being terminated\nE0707 08:13:35.844954       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0707 08:13:36.002373       1 event.go:291] \"Event occurred\" object=\"statefulset-4810/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" 
reason=\"SuccessfulDelete\" message=\"delete Pod ss-2 in StatefulSet ss successful\"\nE0707 08:13:36.161804       1 tokens_controller.go:261] error synchronizing serviceaccount csi-mock-volumes-4912/default: secrets \"default-token-w4mtc\" is forbidden: unable to create new content in namespace csi-mock-volumes-4912 because it is being terminated\nI0707 08:13:36.522608       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4912/pvc-46nwj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-4912\\\" or manually created by system administrator\"\nE0707 08:13:36.696799       1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"pvc-46nwj.161f69f82133ee02\", GenerateName:\"\", Namespace:\"csi-mock-volumes-4912\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:\"\", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"PersistentVolumeClaim\", Namespace:\"csi-mock-volumes-4912\", Name:\"pvc-46nwj\", UID:\"038aa7f7-94ee-4130-b8e1-b5ccd3c6f100\", APIVersion:\"v1\", ResourceVersion:\"10940\", FieldPath:\"\"}, Reason:\"ExternalProvisioning\", Message:\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-4912\\\" or manually created by system administrator\", Source:v1.EventSource{Component:\"persistentvolume-controller\", Host:\"\"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729706107, loc:(*time.Location)(0x6e802e0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbfb9298c1f1df058, ext:485009147593, loc:(*time.Location)(0x6e802e0)}}, Count:22, Type:\"Normal\", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events \"pvc-46nwj.161f69f82133ee02\" is forbidden: unable to create new content in namespace csi-mock-volumes-4912 because it is being terminated' (will not retry!)\nI0707 08:13:38.866216       1 event.go:291] \"Event occurred\" object=\"volume-6332/csi-hostpathzg8ws\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-6332\\\" or manually created by system administrator\"\nI0707 08:13:38.866569       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2772/pvc-htdhs\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-2772\\\" or manually created by system administrator\"\nI0707 08:13:43.117438       1 namespace_controller.go:185] Namespace has been deleted kubectl-9556\nI0707 08:13:43.672489       1 namespace_controller.go:185] Namespace has been deleted replication-controller-8053\nI0707 08:13:44.763483       1 namespace_controller.go:185] Namespace has been deleted 
persistent-local-volumes-test-1070\nE0707 08:13:48.519449       1 tokens_controller.go:261] error synchronizing serviceaccount projected-2108/default: secrets \"default-token-9crww\" is forbidden: unable to create new content in namespace projected-2108 because it is being terminated\nE0707 08:13:48.589347       1 namespace_controller.go:162] deletion of namespace kubectl-4573 failed: unexpected items still remain in namespace: kubectl-4573 for gvr: /v1, Resource=pods\nI0707 08:13:48.948064       1 namespace_controller.go:185] Namespace has been deleted secrets-34\nE0707 08:13:50.499374       1 tokens_controller.go:261] error synchronizing serviceaccount metrics-grabber-2274/default: secrets \"default-token-qnng8\" is forbidden: unable to create new content in namespace metrics-grabber-2274 because it is being terminated\nI0707 08:13:50.526610       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-4912\nI0707 08:13:51.629454       1 event.go:291] \"Event occurred\" object=\"ephemeral-4497/csi-hostpath-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\"\nI0707 08:13:52.185031       1 event.go:291] \"Event occurred\" object=\"ephemeral-4497/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0707 08:13:52.566370       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-9874/csi-mockplugin\nI0707 08:13:53.078012       1 event.go:291] \"Event occurred\" object=\"ephemeral-4497/csi-hostpath-provisioner\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\"\nI0707 08:13:53.419844       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-9874/csi-mockplugin-attacher\nI0707 08:13:53.876670       1 event.go:291] \"Event occurred\" object=\"volume-6332/csi-hostpathzg8ws\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-6332\\\" or manually created by system administrator\"\nI0707 08:13:53.876711       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2772/pvc-htdhs\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-2772\\\" or manually created by system administrator\"\nI0707 08:13:53.955740       1 event.go:291] \"Event occurred\" object=\"ephemeral-4497/csi-hostpath-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\"\nI0707 08:13:54.213354       1 event.go:291] \"Event occurred\" object=\"ephemeral-4497/csi-hostpath-snapshotter\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful\"\nE0707 08:13:54.949702       1 namespace_controller.go:162] deletion of namespace kubectl-4573 failed: unexpected items still remain in namespace: kubectl-4573 
for gvr: /v1, Resource=pods\nE0707 08:13:57.677525       1 namespace_controller.go:162] deletion of namespace provisioning-3191 failed: unexpected items still remain in namespace: provisioning-3191 for gvr: /v1, Resource=pods\nE0707 08:13:58.791122       1 tokens_controller.go:261] error synchronizing serviceaccount emptydir-6630/default: secrets \"default-token-8s2pl\" is forbidden: unable to create new content in namespace emptydir-6630 because it is being terminated\nI0707 08:14:00.263473       1 namespace_controller.go:185] Namespace has been deleted projected-2108\nE0707 08:14:01.433643       1 tokens_controller.go:261] error synchronizing serviceaccount csi-mock-volumes-9874/default: secrets \"default-token-xs97t\" is forbidden: unable to create new content in namespace csi-mock-volumes-9874 because it is being terminated\nI0707 08:14:01.518684       1 event.go:291] \"Event occurred\" object=\"statefulset-4810/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-1 in StatefulSet ss successful\"\nI0707 08:14:02.398777       1 namespace_controller.go:185] Namespace has been deleted metrics-grabber-2274\nE0707 08:14:02.460842       1 tokens_controller.go:261] error synchronizing serviceaccount services-7017/default: secrets \"default-token-4b5tv\" is forbidden: unable to create new content in namespace services-7017 because it is being terminated\nI0707 08:14:03.238781       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-2979\nE0707 08:14:04.556415       1 tokens_controller.go:261] error synchronizing serviceaccount volume-expand-5129/default: secrets \"default-token-zzcvf\" is forbidden: unable to create new content in namespace volume-expand-5129 because it is being terminated\nE0707 08:14:07.067828       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0707 08:14:07.259052       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0707 08:14:08.215681       1 tokens_controller.go:261] error synchronizing serviceaccount projected-9963/default: secrets \"default-token-jnbjk\" is forbidden: unable to create new content in namespace projected-9963 because it is being terminated\nI0707 08:14:08.889388       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2772/pvc-htdhs\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-2772\\\" or manually created by system administrator\"\nI0707 08:14:08.891831       1 event.go:291] \"Event occurred\" object=\"volume-6332/csi-hostpathzg8ws\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-6332\\\" or manually created by system administrator\"\nI0707 08:14:09.275408       1 namespace_controller.go:185] Namespace has been deleted kubectl-4573\nI0707 08:14:09.538070       1 namespace_controller.go:185] Namespace has been deleted emptydir-6630\nE0707 08:14:09.601415       1 tokens_controller.go:261] error synchronizing 
serviceaccount custom-resource-definition-6397/default: secrets \"default-token-kcmbp\" is forbidden: unable to create new content in namespace custom-resource-definition-6397 because it is being terminated\nI0707 08:14:09.756793       1 event.go:291] \"Event occurred\" object=\"ephemeral-934/csi-hostpath-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\"\nI0707 08:14:09.889384       1 event.go:291] \"Event occurred\" object=\"ephemeral-934/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0707 08:14:09.935053       1 event.go:291] \"Event occurred\" object=\"ephemeral-934/csi-hostpath-provisioner\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\"\nI0707 08:14:10.217355       1 event.go:291] \"Event occurred\" object=\"ephemeral-934/csi-hostpath-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\"\nI0707 08:14:10.350401       1 namespace_controller.go:185] Namespace has been deleted services-7017\nI0707 08:14:10.568791       1 event.go:291] \"Event occurred\" object=\"ephemeral-934/csi-hostpath-snapshotter\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful\"\nI0707 08:14:11.168551       1 namespace_controller.go:185] Namespace has been deleted provisioning-3191\nI0707 08:14:12.398250       1 namespace_controller.go:185] Namespace has been deleted volume-expand-5129\nI0707 08:14:13.564738       1 event.go:291] \"Event occurred\" object=\"statefulset-4810/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-0 in StatefulSet ss successful\"\nI0707 08:14:14.429711       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-969/csi-hostpath-attacher\nE0707 08:14:14.646024       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0707 08:14:15.554884       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-969/csi-hostpathplugin\nE0707 08:14:15.733670       1 pv_controller.go:1432] error finding provisioning plugin for claim volumemode-5115/pvc-nzmvs: storageclass.storage.k8s.io \"volumemode-5115\" not found\nI0707 08:14:15.734015       1 event.go:291] \"Event occurred\" object=\"volumemode-5115/pvc-nzmvs\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volumemode-5115\\\" not found\"\nI0707 08:14:16.660070       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-969/csi-hostpath-provisioner\nI0707 08:14:17.103596       1 event.go:291] \"Event occurred\" object=\"webhook-6950/sample-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set 
sample-webhook-deployment-bcc959585 to 1\"\nI0707 08:14:17.356528       1 event.go:291] \"Event occurred\" object=\"webhook-6950/sample-webhook-deployment-bcc959585\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-webhook-deployment-bcc959585-86cwz\"\nE0707 08:14:17.359894       1 tokens_controller.go:261] error synchronizing serviceaccount container-lifecycle-hook-277/default: secrets \"default-token-l4m5z\" is forbidden: unable to create new content in namespace container-lifecycle-hook-277 because it is being terminated\nI0707 08:14:17.374044       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-9874\nI0707 08:14:17.522872       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-969/csi-hostpath-resizer\nI0707 08:14:17.671984       1 event.go:291] \"Event occurred\" object=\"disruption-5577/rs\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: rs-mjp4r\"\nI0707 08:14:18.495607       1 stateful_set.go:419] StatefulSet has been deleted volume-expand-969/csi-hostpath-snapshotter\nI0707 08:14:18.567016       1 namespace_controller.go:185] Namespace has been deleted projected-9963\nI0707 08:14:20.052633       1 namespace_controller.go:185] Namespace has been deleted custom-resource-definition-6397\nE0707 08:14:22.726082       1 pv_controller.go:1432] error finding provisioning plugin for claim volume-99/pvc-x9nr6: storageclass.storage.k8s.io \"volume-99\" not found\nI0707 08:14:22.726449       1 event.go:291] \"Event occurred\" object=\"volume-99/pvc-x9nr6\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-99\\\" not found\"\nI0707 08:14:23.004131       1 namespace_controller.go:185] Namespace has been deleted crictl-8863\nI0707 08:14:23.192697       1 namespace_controller.go:185] Namespace has been deleted services-9982\nI0707 08:14:24.386351       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2772/pvc-htdhs\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-2772\\\" or manually created by system administrator\"\nI0707 08:14:24.386392       1 event.go:291] \"Event occurred\" object=\"volume-6332/csi-hostpathzg8ws\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-6332\\\" or manually created by system administrator\"\nI0707 08:14:26.105761       1 namespace_controller.go:185] Namespace has been deleted mounted-volume-expand-4751\nE0707 08:14:26.402469       1 tokens_controller.go:261] error synchronizing serviceaccount volume-6717/default: secrets \"default-token-dhnz7\" is forbidden: unable to create new content in namespace volume-6717 because it is being terminated\nI0707 08:14:26.930268       1 namespace_controller.go:185] Namespace has been deleted provisioning-6372\nI0707 08:14:26.934111       1 namespace_controller.go:185] Namespace has been deleted replication-controller-8376\nI0707 08:14:31.344321       1 event.go:291] \"Event occurred\" object=\"provisioning-5269/nfsqgxvw\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to 
be created, either by external provisioner \\\"example.com/nfs-provisioning-5269\\\" or manually created by system administrator\"\nI0707 08:14:31.345978       1 event.go:291] \"Event occurred\" object=\"provisioning-5269/nfsqgxvw\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"example.com/nfs-provisioning-5269\\\" or manually created by system administrator\"\nE0707 08:14:32.111334       1 disruption.go:505] Error syncing PodDisruptionBudget disruption-5577/foo, requeuing: Operation cannot be fulfilled on poddisruptionbudgets.policy \"foo\": the object has been modified; please apply your changes to the latest version and try again\nE0707 08:14:32.333908       1 tokens_controller.go:261] error synchronizing serviceaccount volume-expand-969/default: secrets \"default-token-8rhlq\" is forbidden: unable to create new content in namespace volume-expand-969 because it is being terminated\nI0707 08:14:35.201083       1 namespace_controller.go:185] Namespace has been deleted multi-az-9691\nE0707 08:14:36.237908       1 tokens_controller.go:261] error synchronizing serviceaccount containers-9475/default: secrets \"default-token-lzbk7\" is forbidden: unable to create new content in namespace containers-9475 because it is being terminated\nI0707 08:14:36.525769       1 namespace_controller.go:185] Namespace has been deleted volume-6717\nI0707 08:14:38.119029       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-7398\nI0707 08:14:38.919129       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2772/pvc-htdhs\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-2772\\\" or manually created by system administrator\"\nI0707 08:14:38.929237       1 event.go:291] \"Event occurred\" object=\"volume-6332/csi-hostpathzg8ws\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-6332\\\" or manually created by system administrator\"\nI0707 08:14:38.929329       1 event.go:291] \"Event occurred\" object=\"provisioning-5269/nfsqgxvw\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"example.com/nfs-provisioning-5269\\\" or manually created by system administrator\"\nE0707 08:14:39.781566       1 pv_controller.go:1432] error finding provisioning plugin for claim volume-3560/pvc-g8t26: storageclass.storage.k8s.io \"volume-3560\" not found\nI0707 08:14:39.782501       1 event.go:291] \"Event occurred\" object=\"volume-3560/pvc-g8t26\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-3560\\\" not found\"\nI0707 08:14:41.410747       1 namespace_controller.go:185] Namespace has been deleted containers-9475\nI0707 08:14:42.295502       1 event.go:291] \"Event occurred\" object=\"statefulset-7806/datadir-ss-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0707 08:14:42.321886       1 event.go:291] \"Event 
occurred\" object=\"statefulset-7806/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success\"\nI0707 08:14:42.380600       1 event.go:291] \"Event occurred\" object=\"statefulset-7806/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nI0707 08:14:42.449988       1 event.go:291] \"Event occurred\" object=\"statefulset-7806/datadir-ss-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"rancher.io/local-path\\\" or manually created by system administrator\"\nE0707 08:14:44.939155       1 tokens_controller.go:261] error synchronizing serviceaccount webhook-6950/default: secrets \"default-token-cw5xp\" is forbidden: unable to create new content in namespace webhook-6950 because it is being terminated\nE0707 08:14:45.872930       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0707 08:14:46.445025       1 event.go:291] \"Event occurred\" object=\"statefulset-4810/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nE0707 08:14:46.706797       1 namespace_controller.go:162] deletion of namespace disruption-5525 failed: unexpected items still remain in namespace: disruption-5525 for gvr: /v1, Resource=pods\nE0707 08:14:47.652721       1 namespace_controller.go:162] deletion of namespace nettest-2829 failed: unexpected items still remain in namespace: nettest-2829 for gvr: /v1, Resource=pods\nE0707 08:14:48.075708       1 tokens_controller.go:261] error synchronizing serviceaccount container-probe-373/default: secrets \"default-token-xl8g6\" is forbidden: unable to create new content in namespace container-probe-373 because it is being terminated\nE0707 08:14:48.401387       1 tokens_controller.go:261] error synchronizing serviceaccount secrets-863/default: secrets \"default-token-n2hwz\" is forbidden: unable to create new content in namespace secrets-863 because it is being terminated\nE0707 08:14:48.528575       1 tokens_controller.go:261] error synchronizing serviceaccount projected-6553/default: secrets \"default-token-cr4l9\" is forbidden: unable to create new content in namespace projected-6553 because it is being terminated\nI0707 08:14:48.549858       1 namespace_controller.go:185] Namespace has been deleted tables-6506\nI0707 08:14:48.556548       1 resource_quota_controller.go:306] Resource quota has been deleted resourcequota-4143/test-quota\nE0707 08:14:49.381284       1 namespace_controller.go:162] deletion of namespace container-lifecycle-hook-277 failed: unexpected items still remain in namespace: container-lifecycle-hook-277 for gvr: /v1, Resource=pods\nE0707 08:14:49.774600       1 namespace_controller.go:162] deletion of namespace disruption-5525 failed: unexpected items still remain in namespace: disruption-5525 for gvr: /v1, Resource=pods\nE0707 08:14:50.652875       1 tokens_controller.go:261] error synchronizing serviceaccount configmap-9297/default: secrets \"default-token-zbnnl\" is forbidden: unable to create new content in namespace configmap-9297 because it is being 
terminated\nE0707 08:14:50.675675       1 namespace_controller.go:162] deletion of namespace nettest-2829 failed: unexpected items still remain in namespace: nettest-2829 for gvr: /v1, Resource=pods\nE0707 08:14:52.157603       1 namespace_controller.go:162] deletion of namespace container-lifecycle-hook-277 failed: unexpected items still remain in namespace: container-lifecycle-hook-277 for gvr: /v1, Resource=pods\nE0707 08:14:52.454749       1 namespace_controller.go:162] deletion of namespace disruption-5525 failed: unexpected items still remain in namespace: disruption-5525 for gvr: /v1, Resource=pods\nI0707 08:14:52.526793       1 namespace_controller.go:185] Namespace has been deleted webhook-6950-markers\nI0707 08:14:52.751213       1 namespace_controller.go:185] Namespace has been deleted init-container-2357\nI0707 08:14:53.907728       1 event.go:291] \"Event occurred\" object=\"volume-6332/csi-hostpathzg8ws\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-6332\\\" or manually created by system administrator\"\nI0707 08:14:53.908190       1 event.go:291] \"Event occurred\" object=\"statefulset-7806/datadir-ss-0\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"rancher.io/local-path\\\" or manually created by system administrator\"\nI0707 08:14:53.908369       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2772/pvc-htdhs\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-2772\\\" or manually created by system administrator\"\nE0707 08:14:53.986847       1 namespace_controller.go:162] deletion of namespace container-lifecycle-hook-277 failed: unexpected items still remain in namespace: container-lifecycle-hook-277 for gvr: /v1, Resource=pods\nE0707 08:14:54.528538       1 tokens_controller.go:261] error synchronizing serviceaccount resourcequota-4143/default: secrets \"default-token-cpnph\" is forbidden: unable to create new content in namespace resourcequota-4143 because it is being terminated\nE0707 08:14:54.542933       1 namespace_controller.go:162] deletion of namespace disruption-5525 failed: unexpected items still remain in namespace: disruption-5525 for gvr: /v1, Resource=pods\nI0707 08:14:54.843225       1 namespace_controller.go:185] Namespace has been deleted secrets-863\nI0707 08:14:55.392304       1 namespace_controller.go:185] Namespace has been deleted container-probe-373\nI0707 08:14:55.416870       1 namespace_controller.go:185] Namespace has been deleted projected-6553\nE0707 08:14:55.558075       1 pv_controller.go:1432] error finding provisioning plugin for claim volumemode-5937/pvc-gf2p7: storageclass.storage.k8s.io \"volumemode-5937\" not found\nI0707 08:14:55.559637       1 event.go:291] \"Event occurred\" object=\"volumemode-5937/pvc-gf2p7\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volumemode-5937\\\" not found\"\nE0707 08:14:55.716239       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not 
find the requested resource\nI0707 08:14:56.132108       1 namespace_controller.go:185] Namespace has been deleted var-expansion-7155\nE0707 08:14:56.545674       1 namespace_controller.go:162] deletion of namespace container-lifecycle-hook-277 failed: unexpected items still remain in namespace: container-lifecycle-hook-277 for gvr: /v1, Resource=pods\nE0707 08:14:56.950389       1 namespace_controller.go:162] deletion of namespace disruption-5525 failed: unexpected items still remain in namespace: disruption-5525 for gvr: /v1, Resource=pods\nE0707 08:14:57.397317       1 namespace_controller.go:162] deletion of namespace container-lifecycle-hook-277 failed: unexpected items still remain in namespace: container-lifecycle-hook-277 for gvr: /v1, Resource=pods\nI0707 08:14:57.894163       1 event.go:291] \"Event occurred\" object=\"statefulset-4810/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-1 in StatefulSet ss successful\"\nI0707 08:14:57.920432       1 namespace_controller.go:185] Namespace has been deleted configmap-9297\nI0707 08:14:58.066678       1 namespace_controller.go:185] Namespace has been deleted nettest-2829\nI0707 08:15:00.261130       1 namespace_controller.go:185] Namespace has been deleted webhook-6950\nI0707 08:15:00.372317       1 namespace_controller.go:185] Namespace has been deleted downward-api-10\nE0707 08:15:01.024032       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0707 08:15:01.121531       1 namespace_controller.go:162] deletion of namespace disruption-5525 failed: unexpected items still remain in namespace: disruption-5525 for gvr: /v1, Resource=pods\nI0707 08:15:01.535544       1 namespace_controller.go:185] Namespace has been deleted resourcequota-4143\nI0707 08:15:01.822865       1 namespace_controller.go:185] Namespace has been deleted volume-expand-969\nE0707 08:15:02.587435       1 namespace_controller.go:162] deletion of namespace container-lifecycle-hook-277 failed: unexpected items still remain in namespace: container-lifecycle-hook-277 for gvr: /v1, Resource=pods\nE0707 08:15:03.840339       1 namespace_controller.go:162] deletion of namespace disruption-5577 failed: unexpected items still remain in namespace: disruption-5577 for gvr: /v1, Resource=pods\nE0707 08:15:05.853393       1 namespace_controller.go:162] deletion of namespace disruption-5577 failed: unexpected items still remain in namespace: disruption-5577 for gvr: /v1, Resource=pods\nE0707 08:15:07.388272       1 namespace_controller.go:162] deletion of namespace disruption-5577 failed: unexpected items still remain in namespace: disruption-5577 for gvr: /v1, Resource=pods\nI0707 08:15:08.908429       1 event.go:291] \"Event occurred\" object=\"volume-6332/csi-hostpathzg8ws\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-6332\\\" or manually created by system administrator\"\nI0707 08:15:08.908470       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2772/pvc-htdhs\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-2772\\\" or manually 
created by system administrator\"\nI0707 08:15:09.019745       1 event.go:291] \"Event occurred\" object=\"statefulset-4869/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-1 in StatefulSet ss successful\"\nI0707 08:15:09.099113       1 event.go:291] \"Event occurred\" object=\"statefulset-4869/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-1 in StatefulSet ss successful\"\nI0707 08:15:09.202713       1 namespace_controller.go:185] Namespace has been deleted disruption-5525\nE0707 08:15:09.338553       1 tokens_controller.go:261] error synchronizing serviceaccount container-runtime-9615/default: secrets \"default-token-ngs2b\" is forbidden: unable to create new content in namespace container-runtime-9615 because it is being terminated\nI0707 08:15:09.986424       1 namespace_controller.go:185] Namespace has been deleted container-lifecycle-hook-277\nE0707 08:15:09.999009       1 namespace_controller.go:162] deletion of namespace disruption-5577 failed: unexpected items still remain in namespace: disruption-5577 for gvr: /v1, Resource=pods\nE0707 08:15:10.265971       1 tokens_controller.go:261] error synchronizing serviceaccount projected-1252/default: secrets \"default-token-wtp4x\" is forbidden: unable to create new content in namespace projected-1252 because it is being terminated\nI0707 08:15:11.055229       1 reconciler.go:275] attacherDetacher.AttachVolume started for volume \"pvc-7bf33c7d-168b-44d7-9a12-794fbb0efecd\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-6332^f8cdc680-c029-11ea-8f2a-76afcb1630c0\") from node \"kind-worker\" \nI0707 08:15:11.151099       1 operation_generator.go:361] AttachVolume.Attach succeeded for volume \"pvc-7bf33c7d-168b-44d7-9a12-794fbb0efecd\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-6332^f8cdc680-c029-11ea-8f2a-76afcb1630c0\") from node \"kind-worker\" \nI0707 08:15:11.151608       1 event.go:291] \"Event occurred\" object=\"volume-6332/hostpath-injector\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-7bf33c7d-168b-44d7-9a12-794fbb0efecd\\\" \"\nE0707 08:15:11.718576       1 namespace_controller.go:162] deletion of namespace disruption-5577 failed: unexpected items still remain in namespace: disruption-5577 for gvr: /v1, Resource=pods\nE0707 08:15:13.277786       1 namespace_controller.go:162] deletion of namespace disruption-5577 failed: unexpected items still remain in namespace: disruption-5577 for gvr: /v1, Resource=pods\nE0707 08:15:14.360992       1 namespace_controller.go:162] deletion of namespace disruption-5577 failed: unexpected items still remain in namespace: disruption-5577 for gvr: /v1, Resource=pods\nI0707 08:15:15.164097       1 namespace_controller.go:185] Namespace has been deleted container-runtime-9615\nE0707 08:15:16.447403       1 namespace_controller.go:162] deletion of namespace disruption-5577 failed: unexpected items still remain in namespace: disruption-5577 for gvr: /v1, Resource=pods\nI0707 08:15:16.736861       1 namespace_controller.go:185] Namespace has been deleted projected-1252\nI0707 08:15:17.363435       1 event.go:291] \"Event occurred\" object=\"statefulset-4810/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-2 in StatefulSet ss successful\"\nE0707 08:15:17.656274       1 reflector.go:127] 
k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0707 08:15:18.534064       1 namespace_controller.go:162] deletion of namespace disruption-5577 failed: unexpected items still remain in namespace: disruption-5577 for gvr: /v1, Resource=pods\nI0707 08:15:22.232943       1 event.go:291] \"Event occurred\" object=\"default/httpd-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set httpd-deployment-86bff9b6d7 to 1\"\nI0707 08:15:22.406033       1 event.go:291] \"Event occurred\" object=\"default/httpd-deployment-86bff9b6d7\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: httpd-deployment-86bff9b6d7-brcfw\"\nE0707 08:15:23.104938       1 namespace_controller.go:162] deletion of namespace disruption-5577 failed: unexpected items still remain in namespace: disruption-5577 for gvr: /v1, Resource=pods\nI0707 08:15:23.911356       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2772/pvc-htdhs\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-2772\\\" or manually created by system administrator\"\nE0707 08:15:24.984535       1 tokens_controller.go:261] error synchronizing serviceaccount volume-3560/default: secrets \"default-token-gx6lb\" is forbidden: unable to create new content in namespace volume-3560 because it is being terminated\nI0707 08:15:28.317585       1 event.go:291] \"Event occurred\" object=\"services-2819/externalname-service\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: externalname-service-zw5ns\"\nI0707 08:15:28.425910       1 event.go:291] \"Event occurred\" object=\"services-2819/externalname-service\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: externalname-service-g9p65\"\nE0707 08:15:28.800064       1 tokens_controller.go:261] error synchronizing serviceaccount volumemode-2027/default: secrets \"default-token-kn7zt\" is forbidden: unable to create new content in namespace volumemode-2027 because it is being terminated\nE0707 08:15:30.425465       1 namespace_controller.go:162] deletion of namespace disruption-5577 failed: unexpected items still remain in namespace: disruption-5577 for gvr: /v1, Resource=pods\nE0707 08:15:30.980247       1 tokens_controller.go:261] error synchronizing serviceaccount port-forwarding-9667/default: secrets \"default-token-whhqp\" is forbidden: unable to create new content in namespace port-forwarding-9667 because it is being terminated\nE0707 08:15:31.033060       1 stateful_set.go:392] error syncing StatefulSet statefulset-7806/ss, requeuing: The POST operation against Pod could not be completed at this time, please try again.\nI0707 08:15:31.035596       1 event.go:291] \"Event occurred\" object=\"statefulset-7806/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again.\"\nE0707 08:15:31.047427       1 stateful_set.go:392] error syncing StatefulSet statefulset-7806/ss, requeuing: 
The POST operation against Pod could not be completed at this time, please try again.\nI0707 08:15:31.050950       1 event.go:291] \"Event occurred\" object=\"statefulset-7806/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again.\"\nE0707 08:15:31.148042       1 stateful_set.go:392] error syncing StatefulSet statefulset-7806/ss, requeuing: The POST operation against Pod could not be completed at this time, please try again.\nI0707 08:15:31.149561       1 event.go:291] \"Event occurred\" object=\"statefulset-7806/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again.\"\nE0707 08:15:31.189232       1 stateful_set.go:392] error syncing StatefulSet statefulset-7806/ss, requeuing: The POST operation against Pod could not be completed at this time, please try again.\nI0707 08:15:31.219701       1 event.go:291] \"Event occurred\" object=\"statefulset-7806/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again.\"\nE0707 08:15:31.273858       1 stateful_set.go:392] error syncing StatefulSet statefulset-7806/ss, requeuing: The POST operation against Pod could not be completed at this time, please try again.\nI0707 08:15:31.274533       1 event.go:291] \"Event occurred\" object=\"statefulset-7806/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again.\"\nE0707 08:15:31.310718       1 stateful_set.go:392] error syncing StatefulSet statefulset-7806/ss, requeuing: The POST operation against Pod could not be completed at this time, please try again.\nI0707 08:15:31.310817       1 event.go:291] \"Event occurred\" object=\"statefulset-7806/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again.\"\nE0707 08:15:31.411712       1 stateful_set.go:392] error syncing StatefulSet statefulset-7806/ss, requeuing: The POST operation against Pod could not be completed at this time, please try again.\nI0707 08:15:31.412239       1 event.go:291] \"Event occurred\" object=\"statefulset-7806/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Warning\" reason=\"FailedCreate\" message=\"create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again.\"\nE0707 08:15:32.671672       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0707 08:15:32.722119       1 namespace_controller.go:185] Namespace has been deleted volume-3560\nE0707 08:15:33.145893       1 tokens_controller.go:261] error synchronizing serviceaccount kubectl-8877/default: secrets \"default-token-mtsr5\" is forbidden: unable to create new content in 
namespace kubectl-8877 because it is being terminated\nI0707 08:15:33.854691       1 event.go:291] \"Event occurred\" object=\"statefulset-7806/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-0 in StatefulSet ss successful\"\nI0707 08:15:36.644834       1 namespace_controller.go:185] Namespace has been deleted configmap-6992\nI0707 08:15:37.134960       1 namespace_controller.go:185] Namespace has been deleted volumemode-2027\nI0707 08:15:38.913512       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2772/pvc-htdhs\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-2772\\\" or manually created by system administrator\"\nI0707 08:15:39.039271       1 event.go:291] \"Event occurred\" object=\"statefulset-4869/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-1 in StatefulSet ss successful\"\nI0707 08:15:39.164669       1 namespace_controller.go:185] Namespace has been deleted kubectl-8877\nI0707 08:15:40.786113       1 namespace_controller.go:185] Namespace has been deleted volume-3756\nI0707 08:15:41.044409       1 namespace_controller.go:185] Namespace has been deleted multi-az-473\nE0707 08:15:41.176987       1 tokens_controller.go:261] error synchronizing serviceaccount volume-5063/default: secrets \"default-token-c6tmh\" is forbidden: unable to create new content in namespace volume-5063 because it is being terminated\nI0707 08:15:43.775340       1 namespace_controller.go:185] Namespace has been deleted disruption-5577\nE0707 08:15:44.369771       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0707 08:15:45.915529       1 tokens_controller.go:261] error synchronizing serviceaccount watch-5401/default: secrets \"default-token-ndbfh\" is forbidden: unable to create new content in namespace watch-5401 because it is being terminated\nI0707 08:15:47.630677       1 namespace_controller.go:185] Namespace has been deleted volume-5063\nI0707 08:15:47.799639       1 namespace_controller.go:185] Namespace has been deleted sysctl-9884\nE0707 08:15:48.259182       1 pv_controller.go:1432] error finding provisioning plugin for claim volume-6290/pvc-cqm5f: storageclass.storage.k8s.io \"volume-6290\" not found\nI0707 08:15:48.263732       1 event.go:291] \"Event occurred\" object=\"volume-6290/pvc-cqm5f\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-6290\\\" not found\"\nI0707 08:15:50.136660       1 event.go:291] \"Event occurred\" object=\"statefulset-4869/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-0 in StatefulSet ss successful\"\nI0707 08:15:50.174808       1 event.go:291] \"Event occurred\" object=\"statefulset-4869/test\" kind=\"Endpoints\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpoint\" message=\"Failed to update endpoint statefulset-4869/test: Operation cannot be fulfilled on endpoints \\\"test\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0707 08:15:51.147667       1 
namespace_controller.go:185] Namespace has been deleted volumemode-1522\nI0707 08:15:51.620243       1 event.go:291] \"Event occurred\" object=\"services-750/affinity-clusterip\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: affinity-clusterip-xtlqg\"\nI0707 08:15:51.672277       1 event.go:291] \"Event occurred\" object=\"services-750/affinity-clusterip\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: affinity-clusterip-wg24f\"\nI0707 08:15:51.673279       1 event.go:291] \"Event occurred\" object=\"services-750/affinity-clusterip\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: affinity-clusterip-cfmh7\"\nI0707 08:15:52.456512       1 namespace_controller.go:185] Namespace has been deleted provisioning-2390\nI0707 08:15:53.741918       1 reconciler.go:203] attacherDetacher.DetachVolume started for volume \"pvc-7bf33c7d-168b-44d7-9a12-794fbb0efecd\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-6332^f8cdc680-c029-11ea-8f2a-76afcb1630c0\") on node \"kind-worker\" \nI0707 08:15:53.785543       1 operation_generator.go:1400] Verified volume is safe to detach for volume \"pvc-7bf33c7d-168b-44d7-9a12-794fbb0efecd\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-6332^f8cdc680-c029-11ea-8f2a-76afcb1630c0\") on node \"kind-worker\" \nI0707 08:15:53.812056       1 namespace_controller.go:185] Namespace has been deleted provisioning-8905\nI0707 08:15:53.862733       1 operation_generator.go:472] DetachVolume.Detach succeeded for volume \"pvc-7bf33c7d-168b-44d7-9a12-794fbb0efecd\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-6332^f8cdc680-c029-11ea-8f2a-76afcb1630c0\") on node \"kind-worker\" \nI0707 08:15:53.927620       1 namespace_controller.go:185] Namespace has been deleted watch-5401\nI0707 08:15:54.174897       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-9806/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0707 08:15:54.212884       1 stateful_set.go:419] StatefulSet has been deleted statefulset-7806/ss\nI0707 08:15:54.290342       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-9806/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nI0707 08:15:54.350940       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2772/pvc-htdhs\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-2772\\\" or manually created by system administrator\"\nI0707 08:15:54.446052       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-5556/pvc-7wg2x\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-5556\\\" or manually created by system administrator\"\nE0707 08:15:55.313997       1 tokens_controller.go:261] error synchronizing serviceaccount emptydir-5732/default: secrets \"default-token-lpr9k\" is forbidden: unable to create new content in 
namespace emptydir-5732 because it is being terminated\nE0707 08:15:57.467575       1 tokens_controller.go:261] error synchronizing serviceaccount pods-6238/default: secrets \"default-token-fvndh\" is forbidden: unable to create new content in namespace pods-6238 because it is being terminated\nI0707 08:16:01.137871       1 reconciler.go:275] attacherDetacher.AttachVolume started for volume \"pvc-7bf33c7d-168b-44d7-9a12-794fbb0efecd\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-6332^f8cdc680-c029-11ea-8f2a-76afcb1630c0\") from node \"kind-worker\" \nI0707 08:16:01.271246       1 operation_generator.go:361] AttachVolume.Attach succeeded for volume \"pvc-7bf33c7d-168b-44d7-9a12-794fbb0efecd\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-6332^f8cdc680-c029-11ea-8f2a-76afcb1630c0\") from node \"kind-worker\" \nI0707 08:16:01.271786       1 event.go:291] \"Event occurred\" object=\"volume-6332/hostpath-client\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-7bf33c7d-168b-44d7-9a12-794fbb0efecd\\\" \"\nI0707 08:16:02.103435       1 namespace_controller.go:185] Namespace has been deleted emptydir-5732\nI0707 08:16:02.601161       1 namespace_controller.go:185] Namespace has been deleted container-lifecycle-hook-6138\nI0707 08:16:05.743220       1 event.go:291] \"Event occurred\" object=\"ephemeral-744/csi-hostpath-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\"\nI0707 08:16:06.178413       1 event.go:291] \"Event occurred\" object=\"ephemeral-744/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0707 08:16:06.530805       1 event.go:291] \"Event occurred\" object=\"ephemeral-744/csi-hostpath-provisioner\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\"\nI0707 08:16:06.548531       1 event.go:291] \"Event occurred\" object=\"cronjob-8383/forbid\" kind=\"CronJob\" apiVersion=\"batch/v1beta1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created job forbid-1594109760\"\nI0707 08:16:06.677628       1 event.go:291] \"Event occurred\" object=\"cronjob-8383/forbid-1594109760\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: forbid-1594109760-krqx2\"\nI0707 08:16:06.695274       1 event.go:291] \"Event occurred\" object=\"ephemeral-744/csi-hostpath-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\"\nI0707 08:16:06.765254       1 cronjob_controller.go:190] Unable to update status for cronjob-8383/forbid (rv = 14020): Operation cannot be fulfilled on cronjobs.batch \"forbid\": the object has been modified; please apply your changes to the latest version and try again\nI0707 08:16:06.964841       1 event.go:291] \"Event occurred\" object=\"cronjob-9282/replace\" kind=\"CronJob\" apiVersion=\"batch/v1beta1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created job replace-1594109760\"\nI0707 08:16:07.023350       1 event.go:291] \"Event occurred\" 
object=\"ephemeral-744/csi-hostpath-snapshotter\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful\"\nI0707 08:16:07.024111       1 event.go:291] \"Event occurred\" object=\"cronjob-9282/replace-1594109760\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: replace-1594109760-5hvqz\"\nI0707 08:16:07.121791       1 namespace_controller.go:185] Namespace has been deleted port-forwarding-9667\nI0707 08:16:07.158792       1 cronjob_controller.go:190] Unable to update status for cronjob-9282/replace (rv = 14176): Operation cannot be fulfilled on cronjobs.batch \"replace\": the object has been modified; please apply your changes to the latest version and try again\nE0707 08:16:08.630485       1 tokens_controller.go:261] error synchronizing serviceaccount nettest-8563/default: secrets \"default-token-cr2z2\" is forbidden: unable to create new content in namespace nettest-8563 because it is being terminated\nI0707 08:16:08.927533       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2772/pvc-htdhs\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-2772\\\" or manually created by system administrator\"\nI0707 08:16:08.928064       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-5556/pvc-7wg2x\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-5556\\\" or manually created by system administrator\"\nI0707 08:16:10.349104       1 event.go:291] \"Event occurred\" object=\"statefulset-4869/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod ss-0 in StatefulSet ss successful\"\nI0707 08:16:10.916166       1 event.go:291] \"Event occurred\" object=\"statefulset-4810/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-2 in StatefulSet ss successful\"\nE0707 08:16:11.844128       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0707 08:16:13.666936       1 namespace_controller.go:185] Namespace has been deleted volumemode-5115\nI0707 08:16:15.086312       1 namespace_controller.go:185] Namespace has been deleted services-2819\nE0707 08:16:15.375759       1 tokens_controller.go:261] error synchronizing serviceaccount volumemode-5937/default: secrets \"default-token-7458q\" is forbidden: unable to create new content in namespace volumemode-5937 because it is being terminated\nE0707 08:16:15.928123       1 tokens_controller.go:261] error synchronizing serviceaccount container-lifecycle-hook-9444/default: secrets \"default-token-b8sd2\" is forbidden: unable to create new content in namespace container-lifecycle-hook-9444 because it is being terminated\nE0707 08:16:17.008826       1 tokens_controller.go:261] error synchronizing serviceaccount provisioning-5269/default: secrets \"default-token-jqfdb\" is forbidden: unable to create new content in namespace provisioning-5269 because it is being 
terminated\nI0707 08:16:17.242270       1 event.go:291] \"Event occurred\" object=\"cronjob-8383/forbid\" kind=\"CronJob\" apiVersion=\"batch/v1beta1\" type=\"Normal\" reason=\"MissingJob\" message=\"Active job went missing: forbid-1594109760\"\nE0707 08:16:17.785086       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0707 08:16:17.952125       1 namespace_controller.go:185] Namespace has been deleted configmap-2412\nE0707 08:16:18.785051       1 tokens_controller.go:261] error synchronizing serviceaccount volumemode-5424/default: secrets \"default-token-kgnf9\" is forbidden: unable to create new content in namespace volumemode-5424 because it is being terminated\nI0707 08:16:19.755009       1 namespace_controller.go:185] Namespace has been deleted services-2819\nE0707 08:16:20.199135       1 tokens_controller.go:261] error synchronizing serviceaccount kubectl-535/default: secrets \"default-token-dwxtz\" is forbidden: unable to create new content in namespace kubectl-535 because it is being terminated\nI0707 08:16:20.818877       1 namespace_controller.go:185] Namespace has been deleted nettest-8563\nI0707 08:16:21.813620       1 event.go:291] \"Event occurred\" object=\"crd-webhook-8771/sample-crd-conversion-webhook-deployment\" kind=\"Deployment\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"ScalingReplicaSet\" message=\"Scaled up replica set sample-crd-conversion-webhook-deployment-7478868bd9 to 1\"\nE0707 08:16:22.285237       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0707 08:16:22.373551       1 event.go:291] \"Event occurred\" object=\"crd-webhook-8771/sample-crd-conversion-webhook-deployment-7478868bd9\" kind=\"ReplicaSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: sample-crd-conversion-webhook-deployment-7478868bd9-5c78n\"\nI0707 08:16:23.928041       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-5556/pvc-7wg2x\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-5556\\\" or manually created by system administrator\"\nI0707 08:16:23.928302       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2772/pvc-htdhs\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-2772\\\" or manually created by system administrator\"\nI0707 08:16:24.942255       1 namespace_controller.go:185] Namespace has been deleted volumemode-5424\nI0707 08:16:25.034309       1 namespace_controller.go:185] Namespace has been deleted volume-99\nI0707 08:16:25.038892       1 namespace_controller.go:185] Namespace has been deleted volumemode-5937\nI0707 08:16:26.041720       1 namespace_controller.go:185] Namespace has been deleted kubectl-535\nI0707 08:16:26.295186       1 namespace_controller.go:185] Namespace has been deleted provisioning-5269\nE0707 08:16:27.451241       1 pv_controller.go:1432] error finding provisioning plugin for claim persistent-local-volumes-test-3841/pvc-jxxxr: no volume plugin 
matched name: kubernetes.io/no-provisioner\nI0707 08:16:27.451791       1 event.go:291] \"Event occurred\" object=\"persistent-local-volumes-test-3841/pvc-jxxxr\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"no volume plugin matched name: kubernetes.io/no-provisioner\"\nI0707 08:16:28.554767       1 event.go:291] \"Event occurred\" object=\"statefulset-4810/test\" kind=\"Endpoints\" apiVersion=\"v1\" type=\"Warning\" reason=\"FailedToUpdateEndpoint\" message=\"Failed to update endpoint statefulset-4810/test: Operation cannot be fulfilled on endpoints \\\"test\\\": the object has been modified; please apply your changes to the latest version and try again\"\nI0707 08:16:28.573854       1 event.go:291] \"Event occurred\" object=\"statefulset-4810/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-1 in StatefulSet ss successful\"\nE0707 08:16:31.302193       1 tokens_controller.go:261] error synchronizing serviceaccount statefulset-7806/default: secrets \"default-token-6tlzf\" is forbidden: unable to create new content in namespace statefulset-7806 because it is being terminated\nI0707 08:16:32.094040       1 namespace_controller.go:185] Namespace has been deleted container-runtime-2155\nI0707 08:16:34.191273       1 namespace_controller.go:185] Namespace has been deleted emptydir-8157\nI0707 08:16:36.494261       1 namespace_controller.go:185] Namespace has been deleted dns-2378\nI0707 08:16:36.495295       1 namespace_controller.go:185] Namespace has been deleted cronjob-8383\nI0707 08:16:36.749576       1 event.go:291] \"Event occurred\" object=\"provisioning-5520/nfsz47m6\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"example.com/nfs-provisioning-5520\\\" or manually created by system administrator\"\nI0707 08:16:37.228579       1 namespace_controller.go:185] Namespace has been deleted statefulset-7806\nI0707 08:16:38.539313       1 namespace_controller.go:185] Namespace has been deleted projected-3255\nI0707 08:16:38.539475       1 namespace_controller.go:185] Namespace has been deleted pods-6238\nI0707 08:16:38.947279       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-5556/pvc-7wg2x\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-5556\\\" or manually created by system administrator\"\nI0707 08:16:38.947524       1 event.go:291] \"Event occurred\" object=\"provisioning-5520/nfsz47m6\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"example.com/nfs-provisioning-5520\\\" or manually created by system administrator\"\nI0707 08:16:38.947603       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2772/pvc-htdhs\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-2772\\\" or manually created by system administrator\"\nE0707 08:16:41.671990       1 pv_controller.go:1432] error finding provisioning plugin for claim volume-3617/pvc-tzbdg: storageclass.storage.k8s.io \"volume-3617\" 
not found\nI0707 08:16:41.672068       1 event.go:291] \"Event occurred\" object=\"volume-3617/pvc-tzbdg\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-3617\\\" not found\"\nI0707 08:16:44.323777       1 reconciler.go:203] attacherDetacher.DetachVolume started for volume \"pvc-7bf33c7d-168b-44d7-9a12-794fbb0efecd\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-6332^f8cdc680-c029-11ea-8f2a-76afcb1630c0\") on node \"kind-worker\" \nI0707 08:16:44.370053       1 operation_generator.go:1400] Verified volume is safe to detach for volume \"pvc-7bf33c7d-168b-44d7-9a12-794fbb0efecd\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-6332^f8cdc680-c029-11ea-8f2a-76afcb1630c0\") on node \"kind-worker\" \nI0707 08:16:44.417091       1 operation_generator.go:472] DetachVolume.Detach succeeded for volume \"pvc-7bf33c7d-168b-44d7-9a12-794fbb0efecd\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-volume-6332^f8cdc680-c029-11ea-8f2a-76afcb1630c0\") on node \"kind-worker\" \nI0707 08:16:44.758902       1 event.go:291] \"Event occurred\" object=\"pvc-protection-2503/pvc-protectionqz4pj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"WaitForFirstConsumer\" message=\"waiting for first consumer to be created before binding\"\nI0707 08:16:44.928715       1 event.go:291] \"Event occurred\" object=\"pvc-protection-2503/pvc-protectionqz4pj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"rancher.io/local-path\\\" or manually created by system administrator\"\nI0707 08:16:45.020530       1 event.go:291] \"Event occurred\" object=\"provisioning-9859/csi-hostpath-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\"\nI0707 08:16:45.446348       1 event.go:291] \"Event occurred\" object=\"provisioning-9859/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0707 08:16:45.550329       1 event.go:291] \"Event occurred\" object=\"provisioning-9859/csi-hostpath-provisioner\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\"\nI0707 08:16:45.766205       1 event.go:291] \"Event occurred\" object=\"provisioning-9859/csi-hostpath-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\"\nI0707 08:16:45.871020       1 event.go:291] \"Event occurred\" object=\"provisioning-9859/csi-hostpath-snapshotter\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful\"\nE0707 08:16:45.876611       1 namespace_controller.go:162] deletion of namespace container-lifecycle-hook-9444 failed: unexpected items still remain in namespace: container-lifecycle-hook-9444 for gvr: /v1, Resource=pods\nI0707 08:16:45.975931       1 event.go:291] \"Event occurred\" object=\"provisioning-3249/csi-hostpathcr4n2\" 
kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-3249\\\" or manually created by system administrator\"\nE0707 08:16:47.111314       1 namespace_controller.go:162] deletion of namespace container-lifecycle-hook-9444 failed: unexpected items still remain in namespace: container-lifecycle-hook-9444 for gvr: /v1, Resource=pods\nI0707 08:16:47.949457       1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for e2e-test-crd-webhook-8228-crds.stable.example.com\nI0707 08:16:47.949732       1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI0707 08:16:48.050140       1 shared_informer.go:247] Caches are synced for resource quota \nI0707 08:16:49.211249       1 event.go:291] \"Event occurred\" object=\"statefulset-4810/ss\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"delete Pod ss-0 in StatefulSet ss successful\"\nI0707 08:16:49.268875       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0707 08:16:49.269620       1 shared_informer.go:247] Caches are synced for garbage collector \nE0707 08:16:49.729369       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0707 08:16:50.052164       1 namespace_controller.go:162] deletion of namespace container-lifecycle-hook-9444 failed: unexpected items still remain in namespace: container-lifecycle-hook-9444 for gvr: /v1, Resource=pods\nI0707 08:16:53.952722       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2772/pvc-htdhs\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-2772\\\" or manually created by system administrator\"\nI0707 08:16:53.952793       1 event.go:291] \"Event occurred\" object=\"provisioning-3249/csi-hostpathcr4n2\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-3249\\\" or manually created by system administrator\"\nI0707 08:16:53.952832       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-5556/pvc-7wg2x\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-5556\\\" or manually created by system administrator\"\nI0707 08:16:53.952872       1 event.go:291] \"Event occurred\" object=\"pvc-protection-2503/pvc-protectionqz4pj\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"rancher.io/local-path\\\" or manually created by system administrator\"\nI0707 08:16:55.723708       1 namespace_controller.go:185] Namespace has been deleted container-runtime-7870\nI0707 08:16:56.589241       1 namespace_controller.go:185] Namespace has been deleted emptydir-9494\nI0707 08:16:56.696935       1 namespace_controller.go:185] Namespace has been deleted emptydir-9070\nI0707 08:16:56.787164       1 
namespace_controller.go:185] Namespace has been deleted container-lifecycle-hook-9444\nI0707 08:16:57.543782       1 namespace_controller.go:185] Namespace has been deleted configmap-4571\nI0707 08:16:58.469603       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-934/csi-hostpath-attacher\nI0707 08:16:58.821898       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-934/csi-hostpathplugin\nI0707 08:16:59.473552       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-934/csi-hostpath-provisioner\nI0707 08:17:00.199779       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-934/csi-hostpath-resizer\nI0707 08:17:00.539785       1 namespace_controller.go:185] Namespace has been deleted ephemeral-9720\nI0707 08:17:01.252901       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-934/csi-hostpath-snapshotter\nI0707 08:17:04.458427       1 reconciler.go:275] attacherDetacher.AttachVolume started for volume \"pvc-8ce8424a-a73c-4464-ab02-e987d2a067c0\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-5556^4\") from node \"kind-worker\" \nI0707 08:17:05.492010       1 operation_generator.go:361] AttachVolume.Attach succeeded for volume \"pvc-8ce8424a-a73c-4464-ab02-e987d2a067c0\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-5556^4\") from node \"kind-worker\" \nI0707 08:17:05.492544       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-5556/pvc-volume-tester-7mzq5\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-8ce8424a-a73c-4464-ab02-e987d2a067c0\\\" \"\nI0707 08:17:06.050202       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4418/csi-mockplugin-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-attacher-0 in StatefulSet csi-mockplugin-attacher successful\"\nI0707 08:17:06.070805       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-4418/csi-mockplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0707 08:17:06.214347       1 stateful_set.go:419] StatefulSet has been deleted volume-9115/csi-hostpath-attacher\nI0707 08:17:06.347131       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-1668/pvc-spzcx\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-1668\\\" or manually created by system administrator\"\nI0707 08:17:06.757262       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-3008\nI0707 08:17:07.008423       1 stateful_set.go:419] StatefulSet has been deleted volume-9115/csi-hostpathplugin\nI0707 08:17:07.314177       1 reconciler.go:275] attacherDetacher.AttachVolume started for volume \"pvc-ea8d4cfe-2eb7-46d0-816f-196cfe6ace13\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-3249^3e3f0ae2-c02a-11ea-9867-46ecd500b0af\") from node \"kind-worker\" \nI0707 08:17:07.390041       1 namespace_controller.go:185] Namespace has been deleted volume-6332\nE0707 08:17:07.446031       1 tokens_controller.go:261] error synchronizing serviceaccount ephemeral-934/default: secrets \"default-token-crxxm\" is forbidden: unable to create new content in namespace 
ephemeral-934 because it is being terminated\nI0707 08:17:07.688090       1 operation_generator.go:361] AttachVolume.Attach succeeded for volume \"pvc-ea8d4cfe-2eb7-46d0-816f-196cfe6ace13\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-3249^3e3f0ae2-c02a-11ea-9867-46ecd500b0af\") from node \"kind-worker\" \nI0707 08:17:07.688868       1 event.go:291] \"Event occurred\" object=\"provisioning-3249/pod-subpath-test-dynamicpv-zx6l\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-ea8d4cfe-2eb7-46d0-816f-196cfe6ace13\\\" \"\nI0707 08:17:07.858793       1 stateful_set.go:419] StatefulSet has been deleted volume-9115/csi-hostpath-provisioner\nE0707 08:17:07.963749       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0707 08:17:08.539036       1 event.go:291] \"Event occurred\" object=\"cronjob-9282/replace\" kind=\"CronJob\" apiVersion=\"batch/v1beta1\" type=\"Normal\" reason=\"SuccessfulDelete\" message=\"Deleted job replace-1594109760\"\nI0707 08:17:08.863633       1 event.go:291] \"Event occurred\" object=\"cronjob-9282/replace\" kind=\"CronJob\" apiVersion=\"batch/v1beta1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created job replace-1594109820\"\nI0707 08:17:08.960285       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-1668/pvc-spzcx\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-1668\\\" or manually created by system administrator\"\nI0707 08:17:08.960337       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2772/pvc-htdhs\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-2772\\\" or manually created by system administrator\"\nI0707 08:17:08.984651       1 stateful_set.go:419] StatefulSet has been deleted volume-9115/csi-hostpath-resizer\nI0707 08:17:08.989950       1 cronjob_controller.go:190] Unable to update status for cronjob-9282/replace (rv = 14821): Operation cannot be fulfilled on cronjobs.batch \"replace\": the object has been modified; please apply your changes to the latest version and try again\nI0707 08:17:08.992226       1 event.go:291] \"Event occurred\" object=\"cronjob-9282/replace-1594109820\" kind=\"Job\" apiVersion=\"batch/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: replace-1594109820-qzldj\"\nI0707 08:17:09.702786       1 stateful_set.go:419] StatefulSet has been deleted volume-9115/csi-hostpath-snapshotter\nE0707 08:17:09.921789       1 tokens_controller.go:261] error synchronizing serviceaccount multi-az-1969/default: secrets \"default-token-twtmb\" is forbidden: unable to create new content in namespace multi-az-1969 because it is being terminated\nE0707 08:17:11.974916       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0707 08:17:13.628287       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-5731/csi-mockplugin\" kind=\"StatefulSet\" 
apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-0 in StatefulSet csi-mockplugin successful\"\nI0707 08:17:13.989067       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-5731/csi-mockplugin-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-mockplugin-resizer-0 in StatefulSet csi-mockplugin-resizer successful\"\nI0707 08:17:14.105154       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-9793/pvc-dcqj9\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-9793\\\" or manually created by system administrator\"\nE0707 08:17:15.272507       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource\nE0707 08:17:16.774053       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0707 08:17:17.293606       1 namespace_controller.go:185] Namespace has been deleted persistent-local-volumes-test-3841\nI0707 08:17:18.409821       1 shared_informer.go:240] Waiting for caches to sync for resource quota\nI0707 08:17:18.410153       1 shared_informer.go:247] Caches are synced for resource quota \nE0707 08:17:18.829937       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nE0707 08:17:19.593784       1 tokens_controller.go:261] error synchronizing serviceaccount services-750/default: secrets \"default-token-zvl8w\" is forbidden: unable to create new content in namespace services-750 because it is being terminated\nI0707 08:17:19.676125       1 namespace_controller.go:185] Namespace has been deleted multi-az-1969\nI0707 08:17:19.686768       1 shared_informer.go:240] Waiting for caches to sync for garbage collector\nI0707 08:17:19.686820       1 shared_informer.go:247] Caches are synced for garbage collector \nE0707 08:17:19.784602       1 tokens_controller.go:261] error synchronizing serviceaccount volume-6290/default: secrets \"default-token-x88xg\" is forbidden: unable to create new content in namespace volume-6290 because it is being terminated\nI0707 08:17:19.920166       1 namespace_controller.go:185] Namespace has been deleted projected-8551\nE0707 08:17:20.205905       1 tokens_controller.go:261] error synchronizing serviceaccount pv-6117/default: secrets \"default-token-rdw9d\" is forbidden: unable to create new content in namespace pv-6117 because it is being terminated\nI0707 08:17:20.538840       1 namespace_controller.go:185] Namespace has been deleted downward-api-7556\nI0707 08:17:20.984918       1 stateful_set.go:419] StatefulSet has been deleted statefulset-4810/ss\nE0707 08:17:21.236055       1 tokens_controller.go:261] error synchronizing serviceaccount watch-5432/default: secrets \"default-token-n6zsl\" is forbidden: unable to create new content in namespace watch-5432 because it is being terminated\nE0707 08:17:22.205425       1 tokens_controller.go:261] error synchronizing serviceaccount volume-9115/default: secrets \"default-token-n9qjd\" is forbidden: 
unable to create new content in namespace volume-9115 because it is being terminated\nI0707 08:17:22.798205       1 namespace_controller.go:185] Namespace has been deleted kubectl-9577\nI0707 08:17:23.959773       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-9793/pvc-dcqj9\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-9793\\\" or manually created by system administrator\"\nI0707 08:17:23.960187       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-1668/pvc-spzcx\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-1668\\\" or manually created by system administrator\"\nI0707 08:17:23.960236       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2772/pvc-htdhs\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-mock-csi-mock-volumes-2772\\\" or manually created by system administrator\"\nE0707 08:17:25.227354       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0707 08:17:25.607246       1 reconciler.go:275] attacherDetacher.AttachVolume started for volume \"pvc-724a099e-2878-4162-bb21-da597cfa913d\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-1668^4\") from node \"kind-worker\" \nI0707 08:17:25.990658       1 operation_generator.go:361] AttachVolume.Attach succeeded for volume \"pvc-724a099e-2878-4162-bb21-da597cfa913d\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-1668^4\") from node \"kind-worker\" \nI0707 08:17:25.991513       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-1668/pvc-volume-tester-6wx5c\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-724a099e-2878-4162-bb21-da597cfa913d\\\" \"\nI0707 08:17:26.577325       1 namespace_controller.go:185] Namespace has been deleted pv-6117\nI0707 08:17:27.886945       1 namespace_controller.go:185] Namespace has been deleted watch-5432\nI0707 08:17:28.323205       1 reconciler.go:275] attacherDetacher.AttachVolume started for volume \"pvc-90cd9688-44c2-4d47-bfa2-720995a0844d\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-2772^4\") from node \"kind-worker2\" \nI0707 08:17:28.785044       1 operation_generator.go:361] AttachVolume.Attach succeeded for volume \"pvc-90cd9688-44c2-4d47-bfa2-720995a0844d\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-2772^4\") from node \"kind-worker2\" \nI0707 08:17:28.785599       1 event.go:291] \"Event occurred\" object=\"csi-mock-volumes-2772/pvc-volume-tester-px4ct\" kind=\"Pod\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulAttachVolume\" message=\"AttachVolume.Attach succeeded for volume \\\"pvc-90cd9688-44c2-4d47-bfa2-720995a0844d\\\" \"\nI0707 08:17:29.082565       1 namespace_controller.go:185] Namespace has been deleted crd-webhook-8771\nE0707 08:17:29.442338       1 tokens_controller.go:261] error synchronizing serviceaccount projected-586/default: secrets \"default-token-p8p58\" is 
forbidden: unable to create new content in namespace projected-586 because it is being terminated\nI0707 08:17:29.674235       1 namespace_controller.go:185] Namespace has been deleted volume-6290\nE0707 08:17:29.961119       1 disruption.go:505] Error syncing PodDisruptionBudget disruption-8207/foo, requeuing: Operation cannot be fulfilled on poddisruptionbudgets.policy \"foo\": the object has been modified; please apply your changes to the latest version and try again\nE0707 08:17:30.466690       1 pv_controller.go:1432] error finding provisioning plugin for claim persistent-local-volumes-test-6769/pvc-xl2g5: no volume plugin matched name: kubernetes.io/no-provisioner\nI0707 08:17:30.467061       1 event.go:291] \"Event occurred\" object=\"persistent-local-volumes-test-6769/pvc-xl2g5\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"no volume plugin matched name: kubernetes.io/no-provisioner\"\nI0707 08:17:30.816026       1 namespace_controller.go:185] Namespace has been deleted services-750\nI0707 08:17:32.218552       1 event.go:291] \"Event occurred\" object=\"kubectl-9461/agnhost-primary\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: agnhost-primary-9c4ld\"\nI0707 08:17:34.080427       1 event.go:291] \"Event occurred\" object=\"volume-expand-1470/csi-hostpath-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\"\nE0707 08:17:34.669899       1 tokens_controller.go:261] error synchronizing serviceaccount aggregator-654/default: secrets \"default-token-r4k7h\" is forbidden: unable to create new content in namespace aggregator-654 because it is being terminated\nI0707 08:17:34.677185       1 event.go:291] \"Event occurred\" object=\"volume-expand-1470/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0707 08:17:34.724279       1 event.go:291] \"Event occurred\" object=\"volume-expand-1470/csi-hostpath-provisioner\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\"\nI0707 08:17:35.060570       1 event.go:291] \"Event occurred\" object=\"volume-expand-1470/csi-hostpath-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\"\nI0707 08:17:35.339003       1 event.go:291] \"Event occurred\" object=\"volume-expand-1470/csi-hostpath-snapshotter\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful\"\nE0707 08:17:35.339327       1 tokens_controller.go:261] error synchronizing serviceaccount kubelet-test-2689/default: secrets \"default-token-xsfzc\" is forbidden: unable to create new content in namespace kubelet-test-2689 because it is being terminated\nE0707 08:17:35.530274       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested 
resource\nI0707 08:17:35.563200       1 event.go:291] \"Event occurred\" object=\"volume-expand-5652/csi-hostpathxtf44\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-5652\\\" or manually created by system administrator\"\nI0707 08:17:38.322006       1 namespace_controller.go:185] Namespace has been deleted projected-586\nI0707 08:17:38.750736       1 event.go:291] \"Event occurred\" object=\"provisioning-8662/csi-hostpath-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\"\nI0707 08:17:38.789179       1 event.go:291] \"Event occurred\" object=\"provisioning-8662/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0707 08:17:38.988536       1 event.go:291] \"Event occurred\" object=\"volume-expand-5652/csi-hostpathxtf44\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-5652\\\" or manually created by system administrator\"\nI0707 08:17:39.642097       1 event.go:291] \"Event occurred\" object=\"provisioning-8662/csi-hostpath-provisioner\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\"\nI0707 08:17:40.103266       1 event.go:291] \"Event occurred\" object=\"provisioning-8662/csi-hostpath-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\"\nI0707 08:17:40.415445       1 event.go:291] \"Event occurred\" object=\"provisioning-8662/csi-hostpath-snapshotter\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful\"\nI0707 08:17:40.492757       1 event.go:291] \"Event occurred\" object=\"provisioning-9150/csi-hostpath26vpz\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9150\\\" or manually created by system administrator\"\nI0707 08:17:40.493181       1 event.go:291] \"Event occurred\" object=\"provisioning-9150/csi-hostpath26vpz\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9150\\\" or manually created by system administrator\"\nI0707 08:17:41.533630       1 namespace_controller.go:185] Namespace has been deleted ephemeral-934\nE0707 08:17:42.200217       1 tokens_controller.go:261] error synchronizing serviceaccount projected-1881/default: secrets \"default-token-sfzl9\" is forbidden: unable to create new content in namespace projected-1881 because it is being terminated\nE0707 08:17:42.581592       1 tokens_controller.go:261] error synchronizing serviceaccount 
secret-namespace-4362/default: secrets \"default-token-vtkt6\" is forbidden: unable to create new content in namespace secret-namespace-4362 because it is being terminated\nI0707 08:17:45.439008       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-4497/csi-hostpath-attacher\nI0707 08:17:45.491907       1 namespace_controller.go:185] Namespace has been deleted kubelet-test-2689\nI0707 08:17:45.875324       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-4497/csi-hostpathplugin\nI0707 08:17:46.239897       1 namespace_controller.go:185] Namespace has been deleted ephemeral-8952\nE0707 08:17:46.266923       1 pv_controller.go:1432] error finding provisioning plugin for claim volume-316/pvc-65stn: storageclass.storage.k8s.io \"volume-316\" not found\nI0707 08:17:46.267307       1 event.go:291] \"Event occurred\" object=\"volume-316/pvc-65stn\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Warning\" reason=\"ProvisioningFailed\" message=\"storageclass.storage.k8s.io \\\"volume-316\\\" not found\"\nI0707 08:17:46.498704       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-4497/csi-hostpath-provisioner\nI0707 08:17:46.545191       1 namespace_controller.go:185] Namespace has been deleted configmap-5014\nE0707 08:17:47.030147       1 tokens_controller.go:261] error synchronizing serviceaccount kubectl-6259/default: secrets \"default-token-d7qqd\" is forbidden: unable to create new content in namespace kubectl-6259 because it is being terminated\nI0707 08:17:47.136854       1 namespace_controller.go:185] Namespace has been deleted aggregator-654\nI0707 08:17:47.298659       1 namespace_controller.go:185] Namespace has been deleted pvc-protection-2503\nI0707 08:17:47.480492       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-4497/csi-hostpath-resizer\nE0707 08:17:47.590264       1 namespace_controller.go:162] deletion of namespace cronjob-9282 failed: unexpected items still remain in namespace: cronjob-9282 for gvr: /v1, Resource=pods\nE0707 08:17:48.239222       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource\nI0707 08:17:48.389775       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-4497/csi-hostpath-snapshotter\nI0707 08:17:48.803155       1 namespace_controller.go:185] Namespace has been deleted volume-9115\nE0707 08:17:48.933889       1 tokens_controller.go:261] error synchronizing serviceaccount ephemeral-4351/default: secrets \"default-token-d86w2\" is forbidden: unable to create new content in namespace ephemeral-4351 because it is being terminated\nI0707 08:17:49.140067       1 namespace_controller.go:185] Namespace has been deleted secret-namespace-4362\nI0707 08:17:49.164212       1 event.go:291] \"Event occurred\" object=\"ephemeral-7078/csi-hostpath-attacher\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful\"\nI0707 08:17:49.377645       1 event.go:291] \"Event occurred\" object=\"ephemeral-7078/csi-hostpathplugin\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful\"\nI0707 08:17:49.493491       1 event.go:291] \"Event occurred\" object=\"ephemeral-7078/csi-hostpath-provisioner\" 
kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful\"\nI0707 08:17:49.647095       1 namespace_controller.go:185] Namespace has been deleted runtimeclass-3036\nI0707 08:17:49.737378       1 event.go:291] \"Event occurred\" object=\"ephemeral-7078/csi-hostpath-resizer\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful\"\nI0707 08:17:50.009755       1 event.go:291] \"Event occurred\" object=\"ephemeral-7078/csi-hostpath-snapshotter\" kind=\"StatefulSet\" apiVersion=\"apps/v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"create Pod csi-hostpath-snapshotter-0 in StatefulSet csi-hostpath-snapshotter successful\"\nI0707 08:17:50.087075       1 namespace_controller.go:185] Namespace has been deleted projected-1881\nE0707 08:17:51.031307       1 namespace_controller.go:162] deletion of namespace cronjob-9282 failed: unexpected items still remain in namespace: cronjob-9282 for gvr: /v1, Resource=pods\nE0707 08:17:51.623024       1 tokens_controller.go:261] error synchronizing serviceaccount provisioning-5520/default: secrets \"default-token-qf49s\" is forbidden: unable to create new content in namespace provisioning-5520 because it is being terminated\nI0707 08:17:52.039650       1 event.go:291] \"Event occurred\" object=\"proxy-987/proxy-service-pg8hg\" kind=\"ReplicationController\" apiVersion=\"v1\" type=\"Normal\" reason=\"SuccessfulCreate\" message=\"Created pod: proxy-service-pg8hg-swntb\"\nE0707 08:17:52.224125       1 namespace_controller.go:162] deletion of namespace cronjob-9282 failed: unexpected items still remain in namespace: cronjob-9282 for gvr: /v1, Resource=pods\nI0707 08:17:53.870427       1 namespace_controller.go:185] Namespace has been deleted podtemplate-9506\nI0707 08:17:53.998259       1 event.go:291] \"Event occurred\" object=\"volume-expand-5652/csi-hostpathxtf44\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-volume-expand-5652\\\" or manually created by system administrator\"\nI0707 08:17:54.224959       1 event.go:291] \"Event occurred\" object=\"provisioning-9150/csi-hostpath26vpz\" kind=\"PersistentVolumeClaim\" apiVersion=\"v1\" type=\"Normal\" reason=\"ExternalProvisioning\" message=\"waiting for a volume to be created, either by external provisioner \\\"csi-hostpath-provisioning-9150\\\" or manually created by system administrator\"\nE0707 08:17:54.306449       1 namespace_controller.go:162] deletion of namespace cronjob-9282 failed: unexpected items still remain in namespace: cronjob-9282 for gvr: /v1, Resource=pods\nI0707 08:17:55.130008       1 reconciler.go:203] attacherDetacher.DetachVolume started for volume \"pvc-8ce8424a-a73c-4464-ab02-e987d2a067c0\" (UniqueName: \"kubernetes.io/csi/csi-mock-csi-mock-volumes-5556^4\") on node \"kind-worker\" \nI0707 08:17:55.184083       1 namespace_controller.go:185] Namespace has been deleted ssh-1218\nI0707 08:17:55.210588       1 reconciler.go:203] attacherDetacher.DetachVolume started for volume \"pvc-ea8d4cfe-2eb7-46d0-816f-196cfe6ace13\" (UniqueName: \"kubernetes.io/csi/csi-hostpath-provisioning-3249^3e3f0ae2-c02a-11ea-9867-46ecd500b0af\") on node \"kind-worker\" \nI0707 08:17:55.276192       1 
operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-8ce8424a-a73c-4464-ab02-e987d2a067c0" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-5556^4") on node "kind-worker"
I0707 08:17:55.380605       1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-ea8d4cfe-2eb7-46d0-816f-196cfe6ace13" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-3249^3e3f0ae2-c02a-11ea-9867-46ecd500b0af") on node "kind-worker"
E0707 08:17:55.393389       1 tokens_controller.go:261] error synchronizing serviceaccount ephemeral-4497/default: secrets "default-token-pgqds" is forbidden: unable to create new content in namespace ephemeral-4497 because it is being terminated
I0707 08:17:55.514490       1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-ea8d4cfe-2eb7-46d0-816f-196cfe6ace13" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-3249^3e3f0ae2-c02a-11ea-9867-46ecd500b0af") on node "kind-worker"
I0707 08:17:55.876094       1 namespace_controller.go:185] Namespace has been deleted kubectl-6259
I0707 08:17:55.926709       1 namespace_controller.go:185] Namespace has been deleted ephemeral-4351
I0707 08:17:55.940789       1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-8ce8424a-a73c-4464-ab02-e987d2a067c0" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-5556^4") on node "kind-worker"
I0707 08:17:56.328398       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-744/csi-hostpath-attacher
I0707 08:17:56.996375       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-744/csi-hostpathplugin
E0707 08:17:57.393398       1 namespace_controller.go:162] deletion of namespace cronjob-9282 failed: unexpected items still remain in namespace: cronjob-9282 for gvr: /v1, Resource=pods
I0707 08:17:57.580384       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-744/csi-hostpath-provisioner
I0707 08:17:57.802739       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-744/csi-hostpath-resizer
E0707 08:17:58.077790       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0707 08:17:58.090127       1 stateful_set.go:419] StatefulSet has been deleted ephemeral-744/csi-hostpath-snapshotter
E0707 08:17:59.560850       1 namespace_controller.go:162] deletion of namespace cronjob-9282 failed: unexpected items still remain in namespace: cronjob-9282 for gvr: /v1, Resource=pods
I0707 08:18:00.982740       1 namespace_controller.go:185] Namespace has been deleted provisioning-5520
E0707 08:18:01.588157       1 tokens_controller.go:261] error synchronizing serviceaccount kubectl-9461/default: secrets "default-token-xwvcg" is forbidden: unable to create new content in namespace kubectl-9461 because it is being terminated
E0707 08:18:03.696657       1 tokens_controller.go:261] error synchronizing serviceaccount csi-mock-volumes-5556/default: secrets "default-token-6hqtj" is forbidden: unable to create new content in namespace csi-mock-volumes-5556 because it is being terminated
I0707 08:18:04.160798       1 expand_controller.go:287] Ignoring the PVC "csi-mock-volumes-9793/pvc-dcqj9" (uid: "82accf4f-0a88-4285-ae1e-f704b0dad469") : didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
I0707 08:18:04.161595       1 event.go:291] "Event occurred" object="csi-mock-volumes-9793/pvc-dcqj9" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ExternalExpanding" message="Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC."
E0707 08:18:04.385032       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0707 08:18:05.487376       1 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0707 08:18:06.846039       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-9806/csi-mockplugin
I0707 08:18:07.001903       1 stateful_set.go:419] StatefulSet has been deleted csi-mock-volumes-9806/csi-mockplugin-attacher
I0707 08:18:07.255155       1 reconciler.go:275] attacherDetacher.AttachVolume started for volume "pvc-5ba75552-ec8c-49fd-9883-113f7bbf27e1" (UniqueName: "kubernetes.io/csi/csi-hostpath-volume-expand-5652^62577999-c02a-11ea-bbc6-2232c68dfd7b") from node "kind-worker2"
I0707 08:18:07.345738       1 operation_generator.go:361] AttachVolume.Attach succeeded for volume "pvc-5ba75552-ec8c-49fd-9883-113f7bbf27e1" (UniqueName: "kubernetes.io/csi/csi-hostpath-volume-expand-5652^62577999-c02a-11ea-bbc6-2232c68dfd7b") from node "kind-worker2"
I0707 08:18:07.346313       1 event.go:291] "Event occurred" object="volume-expand-5652/pod-3890e074-1cb8-4b55-b5ab-7eebd22c43fb" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-5ba75552-ec8c-49fd-9883-113f7bbf27e1\" "
I0707 08:18:07.363700       1 namespace_controller.go:185] Namespace has been deleted cronjob-9282
I0707 08:18:09.010141       1 event.go:291] "Event occurred" object="provisioning-9150/csi-hostpath26vpz" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"csi-hostpath-provisioning-9150\" or manually created by system administrator"
I0707 08:18:09.522145       1 namespace_controller.go:185] Namespace has been deleted statefulset-4810
I0707 08:18:09.632543       1 namespace_controller.go:185] Namespace has been deleted csi-mock-volumes-5556
I0707 08:18:11.807191       1 reconciler.go:275] attacherDetacher.AttachVolume started for volume "pvc-02cf3446-3b8a-41dc-8a71-d83ad296ce91" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-9150^654d7cd5-c02a-11ea-98bc-06e7320540d6") from node "kind-worker"
I0707 08:18:11.922800       1 operation_generator.go:361] AttachVolume.Attach succeeded for volume "pvc-02cf3446-3b8a-41dc-8a71-d83ad296ce91" (UniqueName: "kubernetes.io/csi/csi-hostpath-provisioning-9150^654d7cd5-c02a-11ea-98bc-06e7320540d6") from node "kind-worker"
I0707 08:18:11.923275       1 event.go:291] "Event occurred" object="provisioning-9150/pod-subpath-test-dynamicpv-8z58" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-02cf3446-3b8a-41dc-8a71-d83ad296ce91\" "
I0707 08:18:16.473574       1 reconciler.go:203] attacherDetacher.DetachVolume started for volume "pvc-90cd9688-44c2-4d47-bfa2-720995a0844d" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-2772^4") on node "kind-worker2"
I0707 08:18:16.506882       1 operation_generator.go:1400] Verified volume is safe to detach for volume "pvc-90cd9688-44c2-4d47-bfa2-720995a0844d" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-2772^4") on node "kind-worker2"
I0707 08:18:16.897410       1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-90cd9688-44c2-4d47-bfa2-720995a0844d" (UniqueName: "kubernetes.io/csi/csi-mock-csi-mock-volumes-2772^4") on node "kind-worker2"
E0707 08:18:16.900569       1 tokens_controller.go:261] error synchronizing serviceaccount csi-mock-volumes-9806/default: secrets "default-token-kn59l" is forbidden: unable to create new content in namespace csi-mock-volumes-9806 because it is being terminated
I0707 08:18:17.199428       1 stateful_set.go:419] StatefulSet has been deleted provisioning-9859/csi-hostpath-attacher
I0707 08:18:17.479238       1 event.go:291] "Event occurred" object="deployment-3513/test-rolling-update-controller" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-rolling-update-controller-zp8mt"
I0707 08:18:17.736499       1 stateful_set.go:419] StatefulSet has been deleted provisioning-9859/csi-hostpathplugin
I0707 08:18:18.036975       1 namespace_controller.go:185] Namespace has been deleted provisioning-3249
I0707 08:18:18.130180       1 stateful_set.go:419] StatefulSet has been deleted provisioning-9859/csi-hostpath-provisioner
I0707 08:18:18.566079       1 stateful_set.go:419] StatefulSet has been deleted provisioning-9859/csi-hostpath-resizer