PR (cofyc): Prioritizing nodes based on volume capacity
Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-02-23 10:07
Elapsed: 15m10s
Revision: e681fffa7bba2670ee30041d3207c654bf6fae7b
Refs: 96347

No Test Failures!


Error lines from build-log.txt

... skipping 57 lines ...
INFO: Invocation ID: e570c770-2edf-489b-b05a-22a7b3163905
Loading: 
Loading: 0 packages loaded
Loading: 0 packages loaded
Loading: 0 packages loaded
Loading: 0 packages loaded
WARNING: Download from https://storage.googleapis.com/k8s-bazel-cache/https://github.com/bazelbuild/rules_docker/releases/download/v0.14.4/rules_docker-v0.14.4.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
Loading: 0 packages loaded
Loading: 0 packages loaded
Analyzing: 4 targets (5 packages loaded, 0 targets configured)
INFO: SHA256 (https://golang.org/dl/?mode=json&include=all) = a6e405e31ea50626c4b0bcf821ba56fd4d7b90c0de5ec32a7c6323c73d7ba48d
Analyzing: 4 targets (22 packages loaded, 36 targets configured)
Analyzing: 4 targets (22 packages loaded, 36 targets configured)
Analyzing: 4 targets (37 packages loaded, 337 targets configured)
Analyzing: 4 targets (606 packages loaded, 8802 targets configured)
Analyzing: 4 targets (1788 packages loaded, 13812 targets configured)
Analyzing: 4 targets (2357 packages loaded, 16486 targets configured)
Analyzing: 4 targets (2357 packages loaded, 16486 targets configured)
Analyzing: 4 targets (2377 packages loaded, 16571 targets configured)
DEBUG: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/bazel_gazelle/internal/go_repository.bzl:189:18: org_golang_x_tools: gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go:2:43: expected 'package', found 'EOF'
gazelle: found packages issue31540 (issue31540.go) and pointer (pointer.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gccgoimporter/testdata
gazelle: found packages a (a.go) and b (b.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gcimporter/testdata
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go:1:34: expected 'package', found 'EOF'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go:1:16: expected ';', found '.'
gazelle: finding module path for import domain.name/importdecl: go get domain.name/importdecl: module domain.name/importdecl: reading https://proxy.golang.org/domain.name/importdecl/@v/list: 410 Gone
	server response: not found: domain.name/importdecl@latest: unrecognized import path "domain.name/importdecl": https fetch: Get "https://domain.name/importdecl?go-get=1": dial tcp: lookup domain.name on 8.8.8.8:53: no such host
gazelle: finding module path for import old.com/one: go get old.com/one: module old.com/one: reading https://proxy.golang.org/old.com/one/@v/list: 410 Gone
	server response: not found: old.com/one@latest: unrecognized import path "old.com/one": https fetch: Get "http://www.old.com/one?go-get=1": redirected from secure URL https://old.com/one?go-get=1 to insecure URL http://www.old.com/one?go-get=1
gazelle: finding module path for import titanic.biz/bar: go get titanic.biz/bar: module titanic.biz/bar: reading https://proxy.golang.org/titanic.biz/bar/@v/list: 410 Gone
	server response: not found: titanic.biz/bar@latest: unrecognized import path "titanic.biz/bar": reading https://titanic.biz/bar?go-get=1: 403 Forbidden
... skipping 66 lines ...
INFO: Build completed successfully, 5591 total actions
$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
INFO: Invocation ID: 53921e23-f857-4243-9f01-d35119c78f7d
Loading: 
Loading: 0 packages loaded
WARNING: Download from https://storage.googleapis.com/k8s-bazel-cache/https://github.com/bazelbuild/rules_docker/releases/download/v0.14.4/rules_docker-v0.14.4.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
Analyzing: target //cmd/kubelet:kubelet (0 packages loaded, 0 targets configured)
DEBUG: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/bazel_gazelle/internal/go_repository.bzl:189:18: org_golang_x_tools: gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go:2:43: expected 'package', found 'EOF'
gazelle: found packages issue31540 (issue31540.go) and pointer (pointer.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gccgoimporter/testdata
gazelle: found packages a (a.go) and b (b.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gcimporter/testdata
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go:1:34: expected 'package', found 'EOF'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go:1:16: expected ';', found '.'
gazelle: finding module path for import domain.name/importdecl: go get domain.name/importdecl: module domain.name/importdecl: reading https://proxy.golang.org/domain.name/importdecl/@v/list: 410 Gone
	server response: not found: domain.name/importdecl@latest: unrecognized import path "domain.name/importdecl": https fetch: Get "https://domain.name/importdecl?go-get=1": dial tcp: lookup domain.name on 8.8.8.8:53: no such host
gazelle: finding module path for import old.com/one: go get old.com/one: module old.com/one: reading https://proxy.golang.org/old.com/one/@v/list: 410 Gone
	server response: not found: old.com/one@latest: unrecognized import path "old.com/one": https fetch: Get "http://www.old.com/one?go-get=1": redirected from secure URL https://old.com/one?go-get=1 to insecure URL http://www.old.com/one?go-get=1
gazelle: finding module path for import titanic.biz/bar: go get titanic.biz/bar: module titanic.biz/bar: reading https://proxy.golang.org/titanic.biz/bar/@v/list: 410 Gone
	server response: not found: titanic.biz/bar@latest: unrecognized import path "titanic.biz/bar": reading https://titanic.biz/bar?go-get=1: 403 Forbidden
... skipping 60 lines ...
INFO: Build completed successfully, 0 total actions
$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
INFO: Invocation ID: 06227d1f-6129-4141-8e5a-bea0da3331e3
Loading: 
Loading: 0 packages loaded
WARNING: Download from https://storage.googleapis.com/k8s-bazel-cache/https://github.com/bazelbuild/rules_docker/releases/download/v0.14.4/rules_docker-v0.14.4.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
Analyzing: target //cmd/kubeadm:kubeadm (0 packages loaded, 0 targets configured)
DEBUG: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/bazel_gazelle/internal/go_repository.bzl:189:18: org_golang_x_tools: gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go:2:43: expected 'package', found 'EOF'
gazelle: found packages issue31540 (issue31540.go) and pointer (pointer.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gccgoimporter/testdata
gazelle: found packages a (a.go) and b (b.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gcimporter/testdata
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go:1:34: expected 'package', found 'EOF'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go:1:16: expected ';', found '.'
gazelle: finding module path for import domain.name/importdecl: go get domain.name/importdecl: module domain.name/importdecl: reading https://proxy.golang.org/domain.name/importdecl/@v/list: 410 Gone
	server response: not found: domain.name/importdecl@latest: unrecognized import path "domain.name/importdecl": https fetch: Get "https://domain.name/importdecl?go-get=1": dial tcp: lookup domain.name on 8.8.8.8:53: no such host
gazelle: finding module path for import old.com/one: go get old.com/one: module old.com/one: reading https://proxy.golang.org/old.com/one/@v/list: 410 Gone
	server response: not found: old.com/one@latest: unrecognized import path "old.com/one": https fetch: Get "http://www.old.com/one?go-get=1": redirected from secure URL https://old.com/one?go-get=1 to insecure URL http://www.old.com/one?go-get=1
gazelle: finding module path for import titanic.biz/bar: go get titanic.biz/bar: module titanic.biz/bar: reading https://proxy.golang.org/titanic.biz/bar/@v/list: 410 Gone
	server response: not found: titanic.biz/bar@latest: unrecognized import path "titanic.biz/bar": reading https://titanic.biz/bar?go-get=1: 403 Forbidden
... skipping 60 lines ...
INFO: Build completed successfully, 0 total actions
$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
INFO: Invocation ID: bde7d904-75c2-4230-ba82-203c65c634f2
Loading: 
Loading: 0 packages loaded
WARNING: Download from https://storage.googleapis.com/k8s-bazel-cache/https://github.com/bazelbuild/rules_docker/releases/download/v0.14.4/rules_docker-v0.14.4.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
Analyzing: target //cmd/kubectl:kubectl (0 packages loaded, 0 targets configured)
DEBUG: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/bazel_gazelle/internal/go_repository.bzl:189:18: org_golang_x_tools: gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go:2:43: expected 'package', found 'EOF'
gazelle: found packages issue31540 (issue31540.go) and pointer (pointer.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gccgoimporter/testdata
gazelle: found packages a (a.go) and b (b.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gcimporter/testdata
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go:1:34: expected 'package', found 'EOF'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go:1:16: expected ';', found '.'
gazelle: finding module path for import domain.name/importdecl: go get domain.name/importdecl: module domain.name/importdecl: reading https://proxy.golang.org/domain.name/importdecl/@v/list: 410 Gone
	server response: not found: domain.name/importdecl@latest: unrecognized import path "domain.name/importdecl": https fetch: Get "https://domain.name/importdecl?go-get=1": dial tcp: lookup domain.name on 8.8.8.8:53: no such host
gazelle: finding module path for import old.com/one: go get old.com/one: module old.com/one: reading https://proxy.golang.org/old.com/one/@v/list: 410 Gone
	server response: not found: old.com/one@latest: unrecognized import path "old.com/one": https fetch: Get "http://www.old.com/one?go-get=1": redirected from secure URL https://old.com/one?go-get=1 to insecure URL http://www.old.com/one?go-get=1
gazelle: finding module path for import titanic.biz/bar: go get titanic.biz/bar: module titanic.biz/bar: reading https://proxy.golang.org/titanic.biz/bar/@v/list: 410 Gone
	server response: not found: titanic.biz/bar@latest: unrecognized import path "titanic.biz/bar": reading https://titanic.biz/bar?go-get=1: 403 Forbidden
... skipping 149 lines ...
WARNING: Waiting for server process to terminate (waited 10 seconds, waiting at most 60)
WARNING: Waiting for server process to terminate (waited 30 seconds, waiting at most 60)
INFO: Waited 60 seconds for server process (pid=7117) to terminate.
WARNING: Waiting for server process to terminate (waited 5 seconds, waiting at most 10)
WARNING: Waiting for server process to terminate (waited 10 seconds, waiting at most 10)
INFO: Waited 10 seconds for server process (pid=7117) to terminate.
FATAL: Attempted to kill stale server process (pid=7117) using SIGKILL, but it did not die in a timely fashion.
+ true
+ pkill ^bazel
+ true
+ dirname /home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl
+ PATH=/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_:/home/prow/go/bin:/home/prow/go/bin:/usr/local/go/bin:/google-cloud-sdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+ export PATH
... skipping 50 lines ...
localAPIEndpoint:
  advertiseAddress: fc00:f853:ccd:e793::3
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: fc00:f853:ccd:e793::3
    provider-id: kind://docker/kind/kind-worker
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: kind-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: fc00:f853:ccd:e793::3
    provider-id: kind://docker/kind/kind-worker
---
address: '::'
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
... skipping 43 lines ...
localAPIEndpoint:
  advertiseAddress: fc00:f853:ccd:e793::4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: fc00:f853:ccd:e793::4
    provider-id: kind://docker/kind/kind-worker2
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: kind-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: fc00:f853:ccd:e793::4
    provider-id: kind://docker/kind/kind-worker2
---
address: '::'
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
... skipping 43 lines ...
localAPIEndpoint:
  advertiseAddress: fc00:f853:ccd:e793::2
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: fc00:f853:ccd:e793::2
    provider-id: kind://docker/kind/kind-control-plane
---
apiVersion: kubeadm.k8s.io/v1beta2
controlPlane:
  localAPIEndpoint:
... skipping 5 lines ...
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: fc00:f853:ccd:e793::2
    provider-id: kind://docker/kind/kind-control-plane
---
address: '::'
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
... skipping 235 lines ...
I0223 10:15:58.787939     205 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0223 10:15:59.287783     205 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0223 10:15:59.787874     205 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s  in 1 milliseconds
I0223 10:16:00.287622     205 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0223 10:16:00.787749     205 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0223 10:16:01.287934     205 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
I0223 10:16:05.907695     205 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 4120 milliseconds
I0223 10:16:06.289499     205 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 2 milliseconds
I0223 10:16:06.789623     205 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 2 milliseconds
I0223 10:16:07.288919     205 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 2 milliseconds
I0223 10:16:07.789710     205 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s 200 OK in 2 milliseconds
I0223 10:16:07.789856     205 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[apiclient] All control plane components are healthy after 105.004650 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0223 10:16:07.795873     205 round_trippers.go:454] POST https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 4 milliseconds
I0223 10:16:07.800872     205 round_trippers.go:454] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 3 milliseconds
... skipping 477 lines ...

Running in parallel across 25 nodes

Feb 23 10:17:34.014: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Feb 23 10:17:34.017: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 23 10:17:34.039: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 23 10:17:34.142: INFO: The status of Pod kube-controller-manager-kind-control-plane is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Feb 23 10:17:34.142: INFO: The status of Pod kube-proxy-l7fjj is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Feb 23 10:17:34.142: INFO: The status of Pod kube-scheduler-kind-control-plane is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Feb 23 10:17:34.142: INFO: 9 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 23 10:17:34.142: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 23 10:17:34.142: INFO: POD                                         NODE                PHASE    GRACE  CONDITIONS
Feb 23 10:17:34.142: INFO: kube-controller-manager-kind-control-plane  kind-control-plane  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC ContainersNotReady containers with unready status: [kube-controller-manager]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC ContainersNotReady containers with unready status: [kube-controller-manager]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC  }]
Feb 23 10:17:34.142: INFO: kube-proxy-l7fjj                            kind-worker         Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:17:27 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:17:27 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:57 +0000 UTC  }]
Feb 23 10:17:34.142: INFO: kube-scheduler-kind-control-plane           kind-control-plane  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC ContainersNotReady containers with unready status: [kube-scheduler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC ContainersNotReady containers with unready status: [kube-scheduler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC  }]
Feb 23 10:17:34.142: INFO: 
Feb 23 10:17:36.157: INFO: The status of Pod kube-controller-manager-kind-control-plane is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Feb 23 10:17:36.157: INFO: The status of Pod kube-proxy-mg2kq is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Feb 23 10:17:36.157: INFO: The status of Pod kube-scheduler-kind-control-plane is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Feb 23 10:17:36.157: INFO: 9 / 12 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
Feb 23 10:17:36.157: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 23 10:17:36.157: INFO: POD                                         NODE                PHASE    GRACE  CONDITIONS
Feb 23 10:17:36.157: INFO: kube-controller-manager-kind-control-plane  kind-control-plane  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC ContainersNotReady containers with unready status: [kube-controller-manager]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC ContainersNotReady containers with unready status: [kube-controller-manager]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC  }]
Feb 23 10:17:36.157: INFO: kube-proxy-mg2kq                            kind-worker         Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:17:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:17:35 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:17:35 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:17:35 +0000 UTC  }]
Feb 23 10:17:36.157: INFO: kube-scheduler-kind-control-plane           kind-control-plane  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC ContainersNotReady containers with unready status: [kube-scheduler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC ContainersNotReady containers with unready status: [kube-scheduler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC  }]
Feb 23 10:17:36.157: INFO: 
Feb 23 10:17:38.162: INFO: The status of Pod kube-controller-manager-kind-control-plane is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Feb 23 10:17:38.162: INFO: The status of Pod kube-proxy-wc7sz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Feb 23 10:17:38.162: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
Feb 23 10:17:38.162: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 23 10:17:38.162: INFO: POD                                         NODE                PHASE    GRACE  CONDITIONS
Feb 23 10:17:38.162: INFO: kube-controller-manager-kind-control-plane  kind-control-plane  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC ContainersNotReady containers with unready status: [kube-controller-manager]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC ContainersNotReady containers with unready status: [kube-controller-manager]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC  }]
Feb 23 10:17:38.163: INFO: kube-proxy-wc7sz                            kind-worker2        Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:17:36 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:17:36 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:57 +0000 UTC  }]
Feb 23 10:17:38.163: INFO: 
Feb 23 10:17:40.158: INFO: The status of Pod kube-controller-manager-kind-control-plane is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Feb 23 10:17:40.158: INFO: The status of Pod kube-proxy-wc7sz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Feb 23 10:17:40.158: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (6 seconds elapsed)
Feb 23 10:17:40.158: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 23 10:17:40.158: INFO: POD                                         NODE                PHASE    GRACE  CONDITIONS
Feb 23 10:17:40.158: INFO: kube-controller-manager-kind-control-plane  kind-control-plane  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC ContainersNotReady containers with unready status: [kube-controller-manager]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC ContainersNotReady containers with unready status: [kube-controller-manager]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC  }]
Feb 23 10:17:40.158: INFO: kube-proxy-wc7sz                            kind-worker2        Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:17:36 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:17:36 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:57 +0000 UTC  }]
Feb 23 10:17:40.158: INFO: 
Feb 23 10:17:42.157: INFO: The status of Pod kube-controller-manager-kind-control-plane is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Feb 23 10:17:42.157: INFO: The status of Pod kube-proxy-wc7sz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Feb 23 10:17:42.157: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (8 seconds elapsed)
Feb 23 10:17:42.157: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 23 10:17:42.157: INFO: POD                                         NODE                PHASE    GRACE  CONDITIONS
Feb 23 10:17:42.157: INFO: kube-controller-manager-kind-control-plane  kind-control-plane  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC ContainersNotReady containers with unready status: [kube-controller-manager]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC ContainersNotReady containers with unready status: [kube-controller-manager]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC  }]
Feb 23 10:17:42.157: INFO: kube-proxy-wc7sz                            kind-worker2        Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:17:36 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:17:36 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:57 +0000 UTC  }]
Feb 23 10:17:42.157: INFO: 
Feb 23 10:17:44.158: INFO: The status of Pod kube-controller-manager-kind-control-plane is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Feb 23 10:17:44.158: INFO: The status of Pod kube-proxy-wc7sz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Feb 23 10:17:44.158: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (10 seconds elapsed)
Feb 23 10:17:44.158: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 23 10:17:44.158: INFO: POD                                         NODE                PHASE    GRACE  CONDITIONS
Feb 23 10:17:44.158: INFO: kube-controller-manager-kind-control-plane  kind-control-plane  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC ContainersNotReady containers with unready status: [kube-controller-manager]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC ContainersNotReady containers with unready status: [kube-controller-manager]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:14 +0000 UTC  }]
Feb 23 10:17:44.158: INFO: kube-proxy-wc7sz                            kind-worker2        Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:57 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:17:36 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:17:36 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-23 10:16:57 +0000 UTC  }]
Feb 23 10:17:44.158: INFO: 
... skipping 168 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:17:46.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1452" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes","total":-1,"completed":1,"skipped":21,"failed":0}

SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] API priority and fairness
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 43 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:17:46.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3902" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes","total":-1,"completed":1,"skipped":6,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-auth] Metadata Concealment
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 66 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:17:47.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "clientset-619" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Generated clientset should create v1beta1 cronJobs, delete cronJobs, watch cronJobs","total":-1,"completed":1,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:17:48.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-2769" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return pod details","total":-1,"completed":1,"skipped":2,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-instrumentation] Events API
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:17:48.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-5695" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Netpol API
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:17:48.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "netpol-9668" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Netpol API should support creating NetworkPolicy API operations","total":-1,"completed":1,"skipped":13,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] RuntimeClass
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 8 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:17:49.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-212" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeFeature:RuntimeHandler]","total":-1,"completed":1,"skipped":18,"failed":0}

S
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 28 lines ...
• [SLOW TEST:10.732 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should release NodePorts on delete
  test/e2e/network/service.go:1551
------------------------------
{"msg":"PASSED [sig-network] Services should release NodePorts on delete","total":-1,"completed":2,"skipped":39,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:11.386 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 30 lines ...
• [SLOW TEST:12.600 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 3 lines ...
Feb 23 10:17:48.259: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:640
STEP: creating secret secrets-2648/secret-test-25880ddc-21e1-4fd6-a528-807d1263996c
STEP: Creating a pod to test consume secrets
Feb 23 10:17:48.273: INFO: Waiting up to 5m0s for pod "pod-configmaps-eeca3788-f47e-4e94-b2bd-edc5214a72b6" in namespace "secrets-2648" to be "Succeeded or Failed"
Feb 23 10:17:48.276: INFO: Pod "pod-configmaps-eeca3788-f47e-4e94-b2bd-edc5214a72b6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.098871ms
Feb 23 10:17:50.290: INFO: Pod "pod-configmaps-eeca3788-f47e-4e94-b2bd-edc5214a72b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016869712s
Feb 23 10:17:52.534: INFO: Pod "pod-configmaps-eeca3788-f47e-4e94-b2bd-edc5214a72b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.260938791s
Feb 23 10:17:54.538: INFO: Pod "pod-configmaps-eeca3788-f47e-4e94-b2bd-edc5214a72b6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.265489247s
Feb 23 10:17:56.797: INFO: Pod "pod-configmaps-eeca3788-f47e-4e94-b2bd-edc5214a72b6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.523999845s
Feb 23 10:17:58.814: INFO: Pod "pod-configmaps-eeca3788-f47e-4e94-b2bd-edc5214a72b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.540851048s
STEP: Saw pod success
Feb 23 10:17:58.814: INFO: Pod "pod-configmaps-eeca3788-f47e-4e94-b2bd-edc5214a72b6" satisfied condition "Succeeded or Failed"
Feb 23 10:17:58.817: INFO: Trying to get logs from node kind-worker pod pod-configmaps-eeca3788-f47e-4e94-b2bd-edc5214a72b6 container env-test: <nil>
STEP: delete the pod
Feb 23 10:17:59.160: INFO: Waiting for pod pod-configmaps-eeca3788-f47e-4e94-b2bd-edc5214a72b6 to disappear
Feb 23 10:17:59.164: INFO: Pod pod-configmaps-eeca3788-f47e-4e94-b2bd-edc5214a72b6 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.737 seconds]
[sig-api-machinery] Secrets
test/e2e/common/secrets.go:36
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":49,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
W0223 10:17:46.737228   18568 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Feb 23 10:17:46.737: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:640
STEP: Creating a pod to test env composition
Feb 23 10:17:46.776: INFO: Waiting up to 5m0s for pod "var-expansion-5bb0dcc0-dc5f-4361-8a42-f8eb9940bbe7" in namespace "var-expansion-9392" to be "Succeeded or Failed"
Feb 23 10:17:46.794: INFO: Pod "var-expansion-5bb0dcc0-dc5f-4361-8a42-f8eb9940bbe7": Phase="Pending", Reason="", readiness=false. Elapsed: 17.300779ms
Feb 23 10:17:48.799: INFO: Pod "var-expansion-5bb0dcc0-dc5f-4361-8a42-f8eb9940bbe7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023049172s
Feb 23 10:17:50.805: INFO: Pod "var-expansion-5bb0dcc0-dc5f-4361-8a42-f8eb9940bbe7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028911715s
Feb 23 10:17:52.809: INFO: Pod "var-expansion-5bb0dcc0-dc5f-4361-8a42-f8eb9940bbe7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032232217s
Feb 23 10:17:54.813: INFO: Pod "var-expansion-5bb0dcc0-dc5f-4361-8a42-f8eb9940bbe7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036413173s
Feb 23 10:17:56.964: INFO: Pod "var-expansion-5bb0dcc0-dc5f-4361-8a42-f8eb9940bbe7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.187459331s
Feb 23 10:17:58.966: INFO: Pod "var-expansion-5bb0dcc0-dc5f-4361-8a42-f8eb9940bbe7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.190158666s
Feb 23 10:18:00.970: INFO: Pod "var-expansion-5bb0dcc0-dc5f-4361-8a42-f8eb9940bbe7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.194027333s
STEP: Saw pod success
Feb 23 10:18:00.970: INFO: Pod "var-expansion-5bb0dcc0-dc5f-4361-8a42-f8eb9940bbe7" satisfied condition "Succeeded or Failed"
Feb 23 10:18:00.973: INFO: Trying to get logs from node kind-worker pod var-expansion-5bb0dcc0-dc5f-4361-8a42-f8eb9940bbe7 container dapi-container: <nil>
STEP: delete the pod
Feb 23 10:18:00.990: INFO: Waiting for pod var-expansion-5bb0dcc0-dc5f-4361-8a42-f8eb9940bbe7 to disappear
Feb 23 10:18:00.995: INFO: Pod var-expansion-5bb0dcc0-dc5f-4361-8a42-f8eb9940bbe7 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:14.692 seconds]
[k8s.io] Variable Expansion
test/e2e/framework/framework.go:635
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-instrumentation] MetricsGrabber
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:18:01.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-2211" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.","total":-1,"completed":2,"skipped":29,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] Downward API
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:17:47.393: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:640
STEP: Creating a pod to test downward api env vars
Feb 23 10:17:49.630: INFO: Waiting up to 5m0s for pod "downward-api-3b54f125-de48-4f7d-8938-bad63f603fb9" in namespace "downward-api-815" to be "Succeeded or Failed"
Feb 23 10:17:49.634: INFO: Pod "downward-api-3b54f125-de48-4f7d-8938-bad63f603fb9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.601312ms
Feb 23 10:17:51.638: INFO: Pod "downward-api-3b54f125-de48-4f7d-8938-bad63f603fb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008169133s
Feb 23 10:17:53.671: INFO: Pod "downward-api-3b54f125-de48-4f7d-8938-bad63f603fb9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041109944s
Feb 23 10:17:55.675: INFO: Pod "downward-api-3b54f125-de48-4f7d-8938-bad63f603fb9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045198303s
Feb 23 10:17:57.680: INFO: Pod "downward-api-3b54f125-de48-4f7d-8938-bad63f603fb9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05001402s
Feb 23 10:17:59.687: INFO: Pod "downward-api-3b54f125-de48-4f7d-8938-bad63f603fb9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.056612907s
Feb 23 10:18:01.691: INFO: Pod "downward-api-3b54f125-de48-4f7d-8938-bad63f603fb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.060930679s
STEP: Saw pod success
Feb 23 10:18:01.691: INFO: Pod "downward-api-3b54f125-de48-4f7d-8938-bad63f603fb9" satisfied condition "Succeeded or Failed"
Feb 23 10:18:01.694: INFO: Trying to get logs from node kind-worker2 pod downward-api-3b54f125-de48-4f7d-8938-bad63f603fb9 container dapi-container: <nil>
STEP: delete the pod
Feb 23 10:18:02.230: INFO: Waiting for pod downward-api-3b54f125-de48-4f7d-8938-bad63f603fb9 to disappear
Feb 23 10:18:02.233: INFO: Pod downward-api-3b54f125-de48-4f7d-8938-bad63f603fb9 no longer exists
[AfterEach] [k8s.io] [sig-node] Downward API
  test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:14.849 seconds]
[k8s.io] [sig-node] Downward API
test/e2e/framework/framework.go:635
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] NodeLease
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:18:02.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "node-lease-test-9695" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace","total":-1,"completed":3,"skipped":16,"failed":0}

SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 27 lines ...
• [SLOW TEST:17.044 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Server request timeout
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 6 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:18:03.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "request-timeout-6708" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL","total":-1,"completed":2,"skipped":13,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] [sig-node] Security Context
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
W0223 10:17:47.457563   18800 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Feb 23 10:17:47.457: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp default which is unconfined [LinuxOnly]
  test/e2e/node/security_context.go:171
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Feb 23 10:17:47.467: INFO: Waiting up to 5m0s for pod "security-context-03701a6a-005f-4014-bebf-7d0057f9af66" in namespace "security-context-6895" to be "Succeeded or Failed"
Feb 23 10:17:47.470: INFO: Pod "security-context-03701a6a-005f-4014-bebf-7d0057f9af66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.939416ms
Feb 23 10:17:49.483: INFO: Pod "security-context-03701a6a-005f-4014-bebf-7d0057f9af66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015318687s
Feb 23 10:17:51.487: INFO: Pod "security-context-03701a6a-005f-4014-bebf-7d0057f9af66": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019406325s
Feb 23 10:17:53.601: INFO: Pod "security-context-03701a6a-005f-4014-bebf-7d0057f9af66": Phase="Pending", Reason="", readiness=false. Elapsed: 6.133922903s
Feb 23 10:17:55.611: INFO: Pod "security-context-03701a6a-005f-4014-bebf-7d0057f9af66": Phase="Pending", Reason="", readiness=false. Elapsed: 8.143456231s
Feb 23 10:17:57.622: INFO: Pod "security-context-03701a6a-005f-4014-bebf-7d0057f9af66": Phase="Pending", Reason="", readiness=false. Elapsed: 10.15515549s
Feb 23 10:17:59.627: INFO: Pod "security-context-03701a6a-005f-4014-bebf-7d0057f9af66": Phase="Pending", Reason="", readiness=false. Elapsed: 12.159559262s
Feb 23 10:18:01.630: INFO: Pod "security-context-03701a6a-005f-4014-bebf-7d0057f9af66": Phase="Pending", Reason="", readiness=false. Elapsed: 14.163288018s
Feb 23 10:18:03.634: INFO: Pod "security-context-03701a6a-005f-4014-bebf-7d0057f9af66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.166764761s
STEP: Saw pod success
Feb 23 10:18:03.634: INFO: Pod "security-context-03701a6a-005f-4014-bebf-7d0057f9af66" satisfied condition "Succeeded or Failed"
Feb 23 10:18:03.637: INFO: Trying to get logs from node kind-worker pod security-context-03701a6a-005f-4014-bebf-7d0057f9af66 container test-container: <nil>
STEP: delete the pod
Feb 23 10:18:03.652: INFO: Waiting for pod security-context-03701a6a-005f-4014-bebf-7d0057f9af66 to disappear
Feb 23 10:18:03.655: INFO: Pod security-context-03701a6a-005f-4014-bebf-7d0057f9af66 no longer exists
[AfterEach] [k8s.io] [sig-node] Security Context
  test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:17.266 seconds]
[k8s.io] [sig-node] Security Context
test/e2e/framework/framework.go:635
  should support seccomp default which is unconfined [LinuxOnly]
  test/e2e/node/security_context.go:171
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]","total":-1,"completed":1,"skipped":2,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:18:03.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4855" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 27 lines ...
• [SLOW TEST:18.713 seconds]
[sig-apps] ReplicationController
test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 17 lines ...
• [SLOW TEST:20.605 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should support cascading deletion of custom resources
  test/e2e/apimachinery/garbage_collector.go:920
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should support cascading deletion of custom resources","total":-1,"completed":1,"skipped":12,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
test/e2e/apps/framework.go:23
  Listing PodDisruptionBudgets for all namespaces
  test/e2e/apps/disruption.go:74
    should list and delete a collection of PodDisruptionBudgets
    test/e2e/apps/disruption.go:77
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets","total":-1,"completed":4,"skipped":33,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:17:48.978: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:640
STEP: creating the pod
Feb 23 10:17:49.987: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:186
Feb 23 10:18:08.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9048" for this suite.


• [SLOW TEST:19.767 seconds]
[k8s.io] InitContainer [NodeConformance]
test/e2e/framework/framework.go:635
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":2,"skipped":46,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] crictl
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 99 lines ...
• [SLOW TEST:8.141 seconds]
[k8s.io] Pods
test/e2e/framework/framework.go:635
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":62,"failed":0}
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:18:09.318: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 8 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:18:09.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6578" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":4,"skipped":62,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] [sig-node] Security Context
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:17:46.902: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly]
  test/e2e/node/security_context.go:89
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Feb 23 10:17:49.571: INFO: Waiting up to 5m0s for pod "security-context-429db0be-9f0f-4b84-b7f8-8b719b0ac357" in namespace "security-context-3985" to be "Succeeded or Failed"
Feb 23 10:17:49.576: INFO: Pod "security-context-429db0be-9f0f-4b84-b7f8-8b719b0ac357": Phase="Pending", Reason="", readiness=false. Elapsed: 5.004733ms
Feb 23 10:17:51.582: INFO: Pod "security-context-429db0be-9f0f-4b84-b7f8-8b719b0ac357": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010693492s
Feb 23 10:17:53.601: INFO: Pod "security-context-429db0be-9f0f-4b84-b7f8-8b719b0ac357": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030135654s
Feb 23 10:17:55.611: INFO: Pod "security-context-429db0be-9f0f-4b84-b7f8-8b719b0ac357": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039621197s
Feb 23 10:17:57.618: INFO: Pod "security-context-429db0be-9f0f-4b84-b7f8-8b719b0ac357": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047036016s
Feb 23 10:17:59.622: INFO: Pod "security-context-429db0be-9f0f-4b84-b7f8-8b719b0ac357": Phase="Pending", Reason="", readiness=false. Elapsed: 10.050993165s
Feb 23 10:18:01.628: INFO: Pod "security-context-429db0be-9f0f-4b84-b7f8-8b719b0ac357": Phase="Pending", Reason="", readiness=false. Elapsed: 12.056926199s
Feb 23 10:18:03.632: INFO: Pod "security-context-429db0be-9f0f-4b84-b7f8-8b719b0ac357": Phase="Pending", Reason="", readiness=false. Elapsed: 14.060655969s
Feb 23 10:18:05.636: INFO: Pod "security-context-429db0be-9f0f-4b84-b7f8-8b719b0ac357": Phase="Pending", Reason="", readiness=false. Elapsed: 16.06500121s
Feb 23 10:18:07.640: INFO: Pod "security-context-429db0be-9f0f-4b84-b7f8-8b719b0ac357": Phase="Pending", Reason="", readiness=false. Elapsed: 18.068824805s
Feb 23 10:18:09.644: INFO: Pod "security-context-429db0be-9f0f-4b84-b7f8-8b719b0ac357": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.072811519s
STEP: Saw pod success
Feb 23 10:18:09.644: INFO: Pod "security-context-429db0be-9f0f-4b84-b7f8-8b719b0ac357" satisfied condition "Succeeded or Failed"
Feb 23 10:18:09.648: INFO: Trying to get logs from node kind-worker pod security-context-429db0be-9f0f-4b84-b7f8-8b719b0ac357 container test-container: <nil>
STEP: delete the pod
Feb 23 10:18:09.687: INFO: Waiting for pod security-context-429db0be-9f0f-4b84-b7f8-8b719b0ac357 to disappear
Feb 23 10:18:09.693: INFO: Pod security-context-429db0be-9f0f-4b84-b7f8-8b719b0ac357 no longer exists
[AfterEach] [k8s.io] [sig-node] Security Context
  test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:22.802 seconds]
[k8s.io] [sig-node] Security Context
test/e2e/framework/framework.go:635
  should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly]
  test/e2e/node/security_context.go:89
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly]","total":-1,"completed":1,"skipped":48,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:18:09.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1478" for this suite.

•S
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":5,"skipped":45,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] Security Context
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:17:57.709: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp unconfined on the pod [LinuxOnly]
  test/e2e/node/security_context.go:157
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Feb 23 10:17:57.765: INFO: Waiting up to 5m0s for pod "security-context-97b24d4b-cee7-4dfe-a8ef-e16ddab0025e" in namespace "security-context-4084" to be "Succeeded or Failed"
Feb 23 10:17:57.770: INFO: Pod "security-context-97b24d4b-cee7-4dfe-a8ef-e16ddab0025e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.672649ms
Feb 23 10:17:59.774: INFO: Pod "security-context-97b24d4b-cee7-4dfe-a8ef-e16ddab0025e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008814958s
Feb 23 10:18:01.778: INFO: Pod "security-context-97b24d4b-cee7-4dfe-a8ef-e16ddab0025e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013071987s
Feb 23 10:18:03.783: INFO: Pod "security-context-97b24d4b-cee7-4dfe-a8ef-e16ddab0025e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.017922188s
Feb 23 10:18:05.787: INFO: Pod "security-context-97b24d4b-cee7-4dfe-a8ef-e16ddab0025e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.022560431s
Feb 23 10:18:07.792: INFO: Pod "security-context-97b24d4b-cee7-4dfe-a8ef-e16ddab0025e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.026911575s
Feb 23 10:18:09.813: INFO: Pod "security-context-97b24d4b-cee7-4dfe-a8ef-e16ddab0025e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.048326023s
STEP: Saw pod success
Feb 23 10:18:09.813: INFO: Pod "security-context-97b24d4b-cee7-4dfe-a8ef-e16ddab0025e" satisfied condition "Succeeded or Failed"
Feb 23 10:18:09.827: INFO: Trying to get logs from node kind-worker pod security-context-97b24d4b-cee7-4dfe-a8ef-e16ddab0025e container test-container: <nil>
STEP: delete the pod
Feb 23 10:18:09.904: INFO: Waiting for pod security-context-97b24d4b-cee7-4dfe-a8ef-e16ddab0025e to disappear
Feb 23 10:18:09.912: INFO: Pod security-context-97b24d4b-cee7-4dfe-a8ef-e16ddab0025e no longer exists
[AfterEach] [k8s.io] [sig-node] Security Context
  test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.231 seconds]
[k8s.io] [sig-node] Security Context
test/e2e/framework/framework.go:635
  should support seccomp unconfined on the pod [LinuxOnly]
  test/e2e/node/security_context.go:157
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]","total":-1,"completed":2,"skipped":11,"failed":0}

S
------------------------------
[BeforeEach] [sig-instrumentation] Events API
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:18:09.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1980" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":2,"skipped":60,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 55 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:382
    should support exec using resource/name
    test/e2e/kubectl/kubectl.go:434
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec using resource/name","total":-1,"completed":1,"skipped":18,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 31 lines ...
• [SLOW TEST:13.286 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":2,"skipped":74,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-network] Service endpoints latency
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 493 lines ...
• [SLOW TEST:28.794 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [sig-scheduling] LimitRange
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 37 lines ...
• [SLOW TEST:7.246 seconds]
[sig-scheduling] LimitRange
test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":3,"skipped":76,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] PreStop
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
• [SLOW TEST:30.228 seconds]
[k8s.io] [sig-node] PreStop
test/e2e/framework/framework.go:635
  graceful pod terminated should wait until preStop hook completes the process
  test/e2e/node/pre_stop.go:170
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process","total":-1,"completed":1,"skipped":0,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 36 lines ...
• [SLOW TEST:15.753 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":3,"skipped":14,"failed":0}

SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:17:57.389: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:640
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:186
Feb 23 10:18:19.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-282" for this suite.


• [SLOW TEST:22.110 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":3,"skipped":46,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 60 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl patch
  test/e2e/kubectl/kubectl.go:1471
    should add annotations for pods in rc  [Conformance]
    test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":-1,"completed":2,"skipped":18,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":-1,"completed":2,"skipped":53,"failed":0}
[BeforeEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:18:14.450: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:640
Feb 23 10:18:14.541: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-46f015dc-cddd-4bec-a126-f267f42650a9" in namespace "security-context-test-5024" to be "Succeeded or Failed"
Feb 23 10:18:14.546: INFO: Pod "busybox-readonly-false-46f015dc-cddd-4bec-a126-f267f42650a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.414388ms
Feb 23 10:18:16.550: INFO: Pod "busybox-readonly-false-46f015dc-cddd-4bec-a126-f267f42650a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008520738s
Feb 23 10:18:18.555: INFO: Pod "busybox-readonly-false-46f015dc-cddd-4bec-a126-f267f42650a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013456925s
Feb 23 10:18:20.568: INFO: Pod "busybox-readonly-false-46f015dc-cddd-4bec-a126-f267f42650a9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026994973s
Feb 23 10:18:22.581: INFO: Pod "busybox-readonly-false-46f015dc-cddd-4bec-a126-f267f42650a9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.039729122s
Feb 23 10:18:24.598: INFO: Pod "busybox-readonly-false-46f015dc-cddd-4bec-a126-f267f42650a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.056595844s
Feb 23 10:18:24.598: INFO: Pod "busybox-readonly-false-46f015dc-cddd-4bec-a126-f267f42650a9" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:186
Feb 23 10:18:24.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5024" for this suite.


... skipping 103 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:382
    should support exec through kubectl proxy
    test/e2e/kubectl/kubectl.go:476
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy","total":-1,"completed":2,"skipped":18,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:18:19.521: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:640
STEP: Creating secret with name secret-test-0d2d6b29-a5ff-4cf6-9898-4e57d2cf00c7
STEP: Creating a pod to test consume secrets
Feb 23 10:18:19.602: INFO: Waiting up to 5m0s for pod "pod-secrets-12621ede-0bbf-4894-9699-2c060ccfa07b" in namespace "secrets-2419" to be "Succeeded or Failed"
Feb 23 10:18:19.607: INFO: Pod "pod-secrets-12621ede-0bbf-4894-9699-2c060ccfa07b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.782547ms
Feb 23 10:18:21.612: INFO: Pod "pod-secrets-12621ede-0bbf-4894-9699-2c060ccfa07b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009839807s
Feb 23 10:18:23.617: INFO: Pod "pod-secrets-12621ede-0bbf-4894-9699-2c060ccfa07b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01537464s
Feb 23 10:18:25.621: INFO: Pod "pod-secrets-12621ede-0bbf-4894-9699-2c060ccfa07b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019256146s
Feb 23 10:18:27.630: INFO: Pod "pod-secrets-12621ede-0bbf-4894-9699-2c060ccfa07b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.02846035s
Feb 23 10:18:29.642: INFO: Pod "pod-secrets-12621ede-0bbf-4894-9699-2c060ccfa07b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.040430679s
STEP: Saw pod success
Feb 23 10:18:29.643: INFO: Pod "pod-secrets-12621ede-0bbf-4894-9699-2c060ccfa07b" satisfied condition "Succeeded or Failed"
Feb 23 10:18:29.646: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-12621ede-0bbf-4894-9699-2c060ccfa07b container secret-env-test: <nil>
STEP: delete the pod
Feb 23 10:18:29.668: INFO: Waiting for pod pod-secrets-12621ede-0bbf-4894-9699-2c060ccfa07b to disappear
Feb 23 10:18:29.672: INFO: Pod pod-secrets-12621ede-0bbf-4894-9699-2c060ccfa07b no longer exists
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.161 seconds]
[sig-api-machinery] Secrets
test/e2e/common/secrets.go:36
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":58,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for API chunking
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 76 lines ...
• [SLOW TEST:20.809 seconds]
[sig-api-machinery] Servers with support for API chunking
test/e2e/apimachinery/framework.go:23
  should return chunks of results for list calls
  test/e2e/apimachinery/chunking.go:77
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls","total":-1,"completed":3,"skipped":80,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-network] Networking
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 49 lines ...
test/e2e/framework/framework.go:635
  Granular Checks: Pods
  test/e2e/common/networking.go:30
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [k8s.io] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}

SSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":2,"skipped":24,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:18:25.072: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 46 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl copy
  test/e2e/kubectl/kubectl.go:1356
    should copy a file from a running Pod
    test/e2e/kubectl/kubectl.go:1373
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod","total":-1,"completed":3,"skipped":24,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:28.153 seconds]
[k8s.io] Probing container
test/e2e/framework/framework.go:635
  should be restarted with a local redirect http liveness probe
  test/e2e/common/container_probe.go:250
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a local redirect http liveness probe","total":-1,"completed":3,"skipped":12,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 27 lines ...
test/e2e/kubectl/framework.go:23
  With a server listening on 0.0.0.0
  test/e2e/kubectl/portforward.go:452
    should support forwarding over websockets
    test/e2e/kubectl/portforward.go:468
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets","total":-1,"completed":3,"skipped":42,"failed":0}

SSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":53,"failed":0}
[BeforeEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:18:24.632: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 15 lines ...
  test/e2e/common/runtime.go:41
    when running a container with a new image
    test/e2e/common/runtime.go:266
      should not be able to pull from private registry without secret [NodeConformance]
      test/e2e/common/runtime.go:388
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]","total":-1,"completed":4,"skipped":53,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 55 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:18:39.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7520" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":2,"skipped":6,"failed":0}

S
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace","total":-1,"completed":4,"skipped":63,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] client-go should negotiate
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
[AfterEach] [sig-api-machinery] client-go should negotiate
  test/e2e/framework/framework.go:186
Feb 23 10:18:40.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready

•
------------------------------
{"msg":"PASSED [sig-api-machinery] client-go should negotiate watch and report errors with accept \"application/vnd.kubernetes.protobuf,application/json\"","total":-1,"completed":3,"skipped":18,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 59 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl label
  test/e2e/kubectl/kubectl.go:1317
    should update the label on a resource  [Conformance]
    test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":-1,"completed":3,"skipped":31,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 56 lines ...
test/e2e/network/framework.go:23
  Granular Checks: Services
  test/e2e/network/networking.go:150
    should support basic nodePort: udp functionality
    test/e2e/network/networking.go:386
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should support basic nodePort: udp functionality","total":-1,"completed":2,"skipped":12,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] RuntimeClass
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 6 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:18:40.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-6069" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector","total":-1,"completed":3,"skipped":19,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 25 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:18:40.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-992" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC","total":-1,"completed":4,"skipped":24,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] health handlers
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:18:41.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "health-3008" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] health handlers should contain necessary checks","total":-1,"completed":4,"skipped":27,"failed":0}

SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Events
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:18:41.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2941" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":5,"skipped":43,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 46 lines ...
• [SLOW TEST:32.329 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
  should block an eviction until the PDB is updated to allow it
  test/e2e/apps/disruption.go:273
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it","total":-1,"completed":6,"skipped":49,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:53.737 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should remove pods when job is deleted
  test/e2e/apps/job.go:75
------------------------------
{"msg":"PASSED [sig-apps] Job should remove pods when job is deleted","total":-1,"completed":2,"skipped":19,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] Security Context
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:18:33.388: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  test/e2e/node/security_context.go:69
STEP: Creating a pod to test pod.Spec.SecurityContext.SupplementalGroups
Feb 23 10:18:33.434: INFO: Waiting up to 5m0s for pod "security-context-9ed802eb-b599-4ce6-8972-c0c8901a35c4" in namespace "security-context-5603" to be "Succeeded or Failed"
Feb 23 10:18:33.437: INFO: Pod "security-context-9ed802eb-b599-4ce6-8972-c0c8901a35c4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.226042ms
Feb 23 10:18:35.441: INFO: Pod "security-context-9ed802eb-b599-4ce6-8972-c0c8901a35c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007859697s
Feb 23 10:18:37.446: INFO: Pod "security-context-9ed802eb-b599-4ce6-8972-c0c8901a35c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012036379s
Feb 23 10:18:39.449: INFO: Pod "security-context-9ed802eb-b599-4ce6-8972-c0c8901a35c4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015230965s
Feb 23 10:18:41.457: INFO: Pod "security-context-9ed802eb-b599-4ce6-8972-c0c8901a35c4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.023360363s
Feb 23 10:18:43.468: INFO: Pod "security-context-9ed802eb-b599-4ce6-8972-c0c8901a35c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.034514888s
STEP: Saw pod success
Feb 23 10:18:43.468: INFO: Pod "security-context-9ed802eb-b599-4ce6-8972-c0c8901a35c4" satisfied condition "Succeeded or Failed"
Feb 23 10:18:43.475: INFO: Trying to get logs from node kind-worker pod security-context-9ed802eb-b599-4ce6-8972-c0c8901a35c4 container test-container: <nil>
STEP: delete the pod
Feb 23 10:18:43.516: INFO: Waiting for pod security-context-9ed802eb-b599-4ce6-8972-c0c8901a35c4 to disappear
Feb 23 10:18:43.520: INFO: Pod security-context-9ed802eb-b599-4ce6-8972-c0c8901a35c4 no longer exists
[AfterEach] [k8s.io] [sig-node] Security Context
  test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.150 seconds]
[k8s.io] [sig-node] Security Context
test/e2e/framework/framework.go:635
  should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
  test/e2e/node/security_context.go:69
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]","total":-1,"completed":4,"skipped":120,"failed":0}

SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 37 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:18:45.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9897" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply apply set/view last-applied","total":-1,"completed":5,"skipped":142,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 64 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:382
    should contain last line of the log
    test/e2e/kubectl/kubectl.go:611
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should contain last line of the log","total":-1,"completed":3,"skipped":70,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
• [SLOW TEST:7.122 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":5,"skipped":29,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:6.113 seconds]
[sig-apps] ReplicationController
test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":7,"skipped":59,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:18:48.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2927" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":8,"skipped":65,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 3 lines ...
[BeforeEach] [k8s.io] Pod Container lifecycle
  test/e2e/node/pods.go:446
[It] should not create extra sandbox if all containers are done
  test/e2e/node/pods.go:450
STEP: creating the pod that should always exit 0
STEP: submitting the pod to kubernetes
Feb 23 10:18:38.966: INFO: Waiting up to 5m0s for pod "pod-always-succeed2703750e-6333-4177-8fff-f80b625536de" in namespace "pods-8042" to be "Succeeded or Failed"
Feb 23 10:18:38.973: INFO: Pod "pod-always-succeed2703750e-6333-4177-8fff-f80b625536de": Phase="Pending", Reason="", readiness=false. Elapsed: 6.836978ms
Feb 23 10:18:40.984: INFO: Pod "pod-always-succeed2703750e-6333-4177-8fff-f80b625536de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018274303s
Feb 23 10:18:42.989: INFO: Pod "pod-always-succeed2703750e-6333-4177-8fff-f80b625536de": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022340446s
Feb 23 10:18:45.012: INFO: Pod "pod-always-succeed2703750e-6333-4177-8fff-f80b625536de": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046067437s
Feb 23 10:18:47.024: INFO: Pod "pod-always-succeed2703750e-6333-4177-8fff-f80b625536de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057549972s
STEP: Saw pod success
Feb 23 10:18:47.024: INFO: Pod "pod-always-succeed2703750e-6333-4177-8fff-f80b625536de" satisfied condition "Succeeded or Failed"
STEP: Getting events about the pod
STEP: Checking events about the pod
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  test/e2e/framework/framework.go:186
Feb 23 10:18:49.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
test/e2e/framework/framework.go:635
  [k8s.io] Pod Container lifecycle
  test/e2e/framework/framework.go:635
    should not create extra sandbox if all containers are done
    test/e2e/node/pods.go:450
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pod Container lifecycle should not create extra sandbox if all containers are done","total":-1,"completed":5,"skipped":56,"failed":0}

SSSS
------------------------------
[BeforeEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:18:38.084: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
  test/e2e/common/security_context.go:330
Feb 23 10:18:38.128: INFO: Waiting up to 5m0s for pod "alpine-nnp-nil-fad74d85-338c-4971-b48c-44f8a51470cd" in namespace "security-context-test-8386" to be "Succeeded or Failed"
Feb 23 10:18:38.131: INFO: Pod "alpine-nnp-nil-fad74d85-338c-4971-b48c-44f8a51470cd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.2908ms
Feb 23 10:18:40.153: INFO: Pod "alpine-nnp-nil-fad74d85-338c-4971-b48c-44f8a51470cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024616566s
Feb 23 10:18:42.156: INFO: Pod "alpine-nnp-nil-fad74d85-338c-4971-b48c-44f8a51470cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027785444s
Feb 23 10:18:44.171: INFO: Pod "alpine-nnp-nil-fad74d85-338c-4971-b48c-44f8a51470cd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042745967s
Feb 23 10:18:46.179: INFO: Pod "alpine-nnp-nil-fad74d85-338c-4971-b48c-44f8a51470cd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051370685s
Feb 23 10:18:48.192: INFO: Pod "alpine-nnp-nil-fad74d85-338c-4971-b48c-44f8a51470cd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.063723763s
Feb 23 10:18:50.241: INFO: Pod "alpine-nnp-nil-fad74d85-338c-4971-b48c-44f8a51470cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.11284083s
Feb 23 10:18:50.241: INFO: Pod "alpine-nnp-nil-fad74d85-338c-4971-b48c-44f8a51470cd" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:186
Feb 23 10:18:50.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8386" for this suite.


... skipping 2 lines ...
test/e2e/framework/framework.go:635
  when creating containers with AllowPrivilegeEscalation
  test/e2e/common/security_context.go:291
    should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
    test/e2e/common/security_context.go:330
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]","total":-1,"completed":4,"skipped":26,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:15.703 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":2,"skipped":16,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] RuntimeClass
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 6 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:18:52.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-3012" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler [NodeFeature:RuntimeHandler]","total":-1,"completed":6,"skipped":33,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
  test/e2e/common/runtime.go:41
    when starting a container that exits
    test/e2e/common/runtime.go:42
      should run with the expected status [NodeConformance] [Conformance]
      test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:18:54.243: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 9 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:18:54.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1046" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":-1,"completed":2,"skipped":2,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] ConfigMap
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:18:54.556: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:640
STEP: Creating configMap that has name configmap-test-emptyKey-f994263a-8513-4062-b39c-ffc089a101be
[AfterEach] [k8s.io] [sig-node] ConfigMap
  test/e2e/framework/framework.go:186
Feb 23 10:18:54.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7977" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":3,"skipped":37,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl client-side validation
  test/e2e/kubectl/kubectl.go:988
    should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
    test/e2e/kubectl/kubectl.go:1033
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema","total":-1,"completed":3,"skipped":26,"failed":0}
[BeforeEach] [k8s.io] NodeLease
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:18:55.558: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename node-lease-test
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 71 lines ...
• [SLOW TEST:73.415 seconds]
[k8s.io] Probing container
test/e2e/framework/framework.go:635
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
• [SLOW TEST:73.053 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should replace jobs when ReplaceConcurrent
  test/e2e/apps/cronjob.go:142
------------------------------
{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent","total":-1,"completed":2,"skipped":39,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 36 lines ...
• [SLOW TEST:14.220 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  Deployment should have a working scale subresource
  test/e2e/apps/deployment.go:136
------------------------------
{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource","total":-1,"completed":6,"skipped":60,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] Probing container
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
• [SLOW TEST:77.316 seconds]
[k8s.io] Probing container
test/e2e/framework/framework.go:635
  should be restarted by liveness probe after startup probe enables it
  test/e2e/common/container_probe.go:347
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted by liveness probe after startup probe enables it","total":-1,"completed":1,"skipped":27,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:19:03.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-6707" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget","total":-1,"completed":2,"skipped":31,"failed":0}

SS
------------------------------
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:19:03.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-2720" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services","total":-1,"completed":7,"skipped":61,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-network] Firewall rule
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 34 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:19:04.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7698" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":3,"skipped":38,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-network] DNS
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 23 10:18:14.640: INFO: Unable to read wheezy_udp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:14.644: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:14.662: INFO: Unable to read jessie_udp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:14.667: INFO: Unable to read jessie_tcp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:14.667: INFO: Lookups using dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 23 10:18:19.728: INFO: Unable to read wheezy_udp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:19.739: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:19.804: INFO: Unable to read jessie_udp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:19.826: INFO: Unable to read jessie_tcp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:19.826: INFO: Lookups using dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 23 10:18:24.701: INFO: Unable to read wheezy_udp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:24.709: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:24.743: INFO: Unable to read jessie_udp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:24.759: INFO: Unable to read jessie_tcp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:24.759: INFO: Lookups using dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 23 10:18:29.682: INFO: Unable to read wheezy_udp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:29.689: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:29.703: INFO: Unable to read jessie_udp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:29.708: INFO: Unable to read jessie_tcp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:29.708: INFO: Lookups using dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 23 10:18:34.679: INFO: Unable to read wheezy_udp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:34.683: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:34.697: INFO: Unable to read jessie_udp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:34.703: INFO: Unable to read jessie_tcp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:34.703: INFO: Lookups using dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 23 10:18:39.692: INFO: Unable to read wheezy_udp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:39.696: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:39.722: INFO: Unable to read jessie_udp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:39.726: INFO: Unable to read jessie_tcp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:39.726: INFO: Lookups using dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 23 10:18:44.787: INFO: Unable to read wheezy_udp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:44.800: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:44.921: INFO: Unable to read jessie_udp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:44.934: INFO: Unable to read jessie_tcp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:44.934: INFO: Lookups using dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 23 10:18:49.705: INFO: Unable to read wheezy_udp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:49.731: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:49.753: INFO: Unable to read jessie_udp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:49.757: INFO: Unable to read jessie_tcp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:49.757: INFO: Lookups using dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 23 10:18:54.721: INFO: Unable to read wheezy_udp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:54.731: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:54.748: INFO: Unable to read jessie_udp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:54.753: INFO: Unable to read jessie_tcp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:54.753: INFO: Lookups using dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 23 10:18:59.680: INFO: Unable to read wheezy_udp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:59.684: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:59.695: INFO: Unable to read jessie_udp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:59.699: INFO: Unable to read jessie_tcp@PodARecord from pod dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14: the server could not find the requested resource (get pods dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14)
Feb 23 10:18:59.699: INFO: Lookups using dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 23 10:19:04.725: INFO: DNS probes using dns-645/dns-test-f42b26ad-919b-44eb-9481-0ab3a73e5e14 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:78.465 seconds]
[sig-network] DNS
test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":8,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 23 10:18:35.293: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b: the server could not find the requested resource (get pods dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b)
Feb 23 10:18:35.304: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b: the server could not find the requested resource (get pods dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b)
Feb 23 10:18:35.341: INFO: Unable to read jessie_udp@PodARecord from pod dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b: the server could not find the requested resource (get pods dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b)
Feb 23 10:18:35.346: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b: the server could not find the requested resource (get pods dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b)
Feb 23 10:18:35.346: INFO: Lookups using dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 23 10:18:40.453: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b: the server could not find the requested resource (get pods dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b)
Feb 23 10:18:40.483: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b: the server could not find the requested resource (get pods dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b)
Feb 23 10:18:40.509: INFO: Unable to read jessie_udp@PodARecord from pod dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b: the server could not find the requested resource (get pods dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b)
Feb 23 10:18:40.514: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b: the server could not find the requested resource (get pods dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b)
Feb 23 10:18:40.514: INFO: Lookups using dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 23 10:18:45.403: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b: the server could not find the requested resource (get pods dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b)
Feb 23 10:18:45.424: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b: the server could not find the requested resource (get pods dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b)
Feb 23 10:18:45.456: INFO: Unable to read jessie_udp@PodARecord from pod dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b: the server could not find the requested resource (get pods dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b)
Feb 23 10:18:45.469: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b: the server could not find the requested resource (get pods dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b)
Feb 23 10:18:45.470: INFO: Lookups using dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 23 10:18:50.515: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b: the server could not find the requested resource (get pods dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b)
Feb 23 10:18:50.617: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b: the server could not find the requested resource (get pods dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b)
Feb 23 10:18:50.665: INFO: Unable to read jessie_udp@PodARecord from pod dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b: the server could not find the requested resource (get pods dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b)
Feb 23 10:18:50.680: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b: the server could not find the requested resource (get pods dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b)
Feb 23 10:18:50.680: INFO: Lookups using dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 23 10:18:55.397: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b: the server could not find the requested resource (get pods dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b)
Feb 23 10:18:55.405: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b: the server could not find the requested resource (get pods dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b)
Feb 23 10:18:55.421: INFO: Unable to read jessie_udp@PodARecord from pod dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b: the server could not find the requested resource (get pods dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b)
Feb 23 10:18:55.430: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b: the server could not find the requested resource (get pods dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b)
Feb 23 10:18:55.430: INFO: Lookups using dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 23 10:19:00.359: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b: the server could not find the requested resource (get pods dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b)
Feb 23 10:19:00.363: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b: the server could not find the requested resource (get pods dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b)
Feb 23 10:19:00.374: INFO: Unable to read jessie_udp@PodARecord from pod dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b: the server could not find the requested resource (get pods dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b)
Feb 23 10:19:00.378: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b: the server could not find the requested resource (get pods dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b)
Feb 23 10:19:00.378: INFO: Lookups using dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 23 10:19:05.376: INFO: DNS probes using dns-3718/dns-test-e2a0cc51-f6c6-44cc-aa9c-ef155efd968b succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 5 lines ...
• [SLOW TEST:50.327 seconds]
[sig-network] DNS
test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 23 10:18:37.803: INFO: Unable to read wheezy_udp@PodARecord from pod dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e: the server could not find the requested resource (get pods dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e)
Feb 23 10:18:37.823: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e: the server could not find the requested resource (get pods dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e)
Feb 23 10:18:37.872: INFO: Unable to read jessie_udp@PodARecord from pod dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e: the server could not find the requested resource (get pods dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e)
Feb 23 10:18:37.878: INFO: Unable to read jessie_tcp@PodARecord from pod dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e: the server could not find the requested resource (get pods dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e)
Feb 23 10:18:37.878: INFO: Lookups using dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 23 10:18:42.934: INFO: Unable to read wheezy_udp@PodARecord from pod dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e: the server could not find the requested resource (get pods dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e)
Feb 23 10:18:42.938: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e: the server could not find the requested resource (get pods dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e)
Feb 23 10:18:42.965: INFO: Unable to read jessie_udp@PodARecord from pod dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e: the server could not find the requested resource (get pods dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e)
Feb 23 10:18:42.970: INFO: Unable to read jessie_tcp@PodARecord from pod dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e: the server could not find the requested resource (get pods dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e)
Feb 23 10:18:42.970: INFO: Lookups using dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 23 10:18:47.925: INFO: Unable to read wheezy_udp@PodARecord from pod dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e: the server could not find the requested resource (get pods dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e)
Feb 23 10:18:47.933: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e: the server could not find the requested resource (get pods dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e)
Feb 23 10:18:47.994: INFO: Unable to read jessie_udp@PodARecord from pod dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e: the server could not find the requested resource (get pods dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e)
Feb 23 10:18:47.999: INFO: Unable to read jessie_tcp@PodARecord from pod dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e: the server could not find the requested resource (get pods dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e)
Feb 23 10:18:47.999: INFO: Lookups using dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 23 10:18:52.918: INFO: Unable to read wheezy_udp@PodARecord from pod dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e: the server could not find the requested resource (get pods dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e)
Feb 23 10:18:52.922: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e: the server could not find the requested resource (get pods dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e)
Feb 23 10:18:53.033: INFO: Unable to read jessie_udp@PodARecord from pod dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e: the server could not find the requested resource (get pods dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e)
Feb 23 10:18:53.046: INFO: Unable to read jessie_tcp@PodARecord from pod dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e: the server could not find the requested resource (get pods dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e)
Feb 23 10:18:53.046: INFO: Lookups using dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 23 10:18:57.927: INFO: Unable to read wheezy_udp@PodARecord from pod dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e: the server could not find the requested resource (get pods dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e)
Feb 23 10:18:57.941: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e: the server could not find the requested resource (get pods dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e)
Feb 23 10:18:57.999: INFO: Unable to read jessie_udp@PodARecord from pod dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e: the server could not find the requested resource (get pods dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e)
Feb 23 10:18:58.007: INFO: Unable to read jessie_tcp@PodARecord from pod dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e: the server could not find the requested resource (get pods dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e)
Feb 23 10:18:58.007: INFO: Lookups using dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 23 10:19:02.910: INFO: Unable to read wheezy_udp@PodARecord from pod dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e: the server could not find the requested resource (get pods dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e)
Feb 23 10:19:02.914: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e: the server could not find the requested resource (get pods dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e)
Feb 23 10:19:02.947: INFO: Unable to read jessie_udp@PodARecord from pod dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e: the server could not find the requested resource (get pods dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e)
Feb 23 10:19:02.951: INFO: Unable to read jessie_tcp@PodARecord from pod dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e: the server could not find the requested resource (get pods dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e)
Feb 23 10:19:02.952: INFO: Lookups using dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb 23 10:19:07.934: INFO: DNS probes using dns-863/dns-test-53a79a2e-b6ed-4ba8-bd84-cf5b94a65e9e succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:38.261 seconds]
[sig-network] DNS
test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for the cluster [LinuxOnly]
  test/e2e/network/dns.go:89
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]","total":-1,"completed":5,"skipped":72,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 17 lines ...
test/e2e/framework/framework.go:635
  when scheduling a busybox command that always fails in a pod
  test/e2e/common/kubelet.go:79
    should have an terminated reason [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":41,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 8 lines ...
STEP: creating execpod-noendpoints on node kind-worker
Feb 23 10:19:01.864: INFO: Creating new exec pod
Feb 23 10:19:09.884: INFO: waiting up to 30s to connect to no-pods:80
STEP: hitting service no-pods:80 from pod execpod-noendpoints on node kind-worker
Feb 23 10:19:09.884: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-543 exec execpod-noendpointsbln2f -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80'
Feb 23 10:19:11.168: INFO: rc: 1
Feb 23 10:19:11.168: INFO: error contained 'REFUSED', as expected: error running /home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-543 exec execpod-noendpointsbln2f -- /bin/sh -x -c /agnhost connect --timeout=3s no-pods:80:
Command stdout:

stderr:
+ /agnhost connect '--timeout=3s' no-pods:80
REFUSED
command terminated with exit code 1

error:
exit status 1
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:186
Feb 23 10:19:11.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-543" for this suite.
[AfterEach] [sig-network] Services
... skipping 3 lines ...
• [SLOW TEST:9.362 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should be rejected when no endpoints exist
  test/e2e/network/service.go:1958
------------------------------
{"msg":"PASSED [sig-network] Services should be rejected when no endpoints exist","total":-1,"completed":3,"skipped":41,"failed":0}

SSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [k8s.io] NodeLease when the NodeLease feature is enabled should have OwnerReferences set","total":-1,"completed":4,"skipped":26,"failed":0}
[BeforeEach] [k8s.io] Pods
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:18:55.620: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 22 lines ...
• [SLOW TEST:18.088 seconds]
[k8s.io] Pods
test/e2e/framework/framework.go:635
  should get a host IP [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":26,"failed":0}

SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 33 lines ...
• [SLOW TEST:35.651 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":4,"skipped":16,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 89 lines ...
• [SLOW TEST:18.295 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
I0223 10:18:01.784163   18665 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0223 10:18:04.784448   18665 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Feb 23 10:18:04.821: INFO: Creating new exec pod
Feb 23 10:18:14.848: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-1528 exec execpodqn666 -- /bin/sh -x -c nslookup nodeport-service.services-1528.svc.cluster.local'
Feb 23 10:18:15.222: INFO: rc: 1
Feb 23 10:18:15.222: INFO: ExternalName service "services-1528/execpodqn666" failed to resolve to IP
Feb 23 10:18:17.222: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-1528 exec execpodqn666 -- /bin/sh -x -c nslookup nodeport-service.services-1528.svc.cluster.local'
Feb 23 10:18:17.495: INFO: rc: 1
Feb 23 10:18:17.495: INFO: ExternalName service "services-1528/execpodqn666" failed to resolve to IP
Feb 23 10:18:19.222: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-1528 exec execpodqn666 -- /bin/sh -x -c nslookup nodeport-service.services-1528.svc.cluster.local'
Feb 23 10:18:19.573: INFO: rc: 1
Feb 23 10:18:19.573: INFO: ExternalName service "services-1528/execpodqn666" failed to resolve to IP
Feb 23 10:18:21.222: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-1528 exec execpodqn666 -- /bin/sh -x -c nslookup nodeport-service.services-1528.svc.cluster.local'
Feb 23 10:18:21.518: INFO: rc: 1
Feb 23 10:18:21.518: INFO: ExternalName service "services-1528/execpodqn666" failed to resolve to IP
Feb 23 10:18:23.222: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-1528 exec execpodqn666 -- /bin/sh -x -c nslookup nodeport-service.services-1528.svc.cluster.local'
Feb 23 10:18:23.531: INFO: rc: 1
Feb 23 10:18:23.531: INFO: ExternalName service "services-1528/execpodqn666" failed to resolve to IP
Feb 23 10:18:25.222: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-1528 exec execpodqn666 -- /bin/sh -x -c nslookup nodeport-service.services-1528.svc.cluster.local'
Feb 23 10:18:25.608: INFO: rc: 1
Feb 23 10:18:25.608: INFO: ExternalName service "services-1528/execpodqn666" failed to resolve to IP
Feb 23 10:18:27.222: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-1528 exec execpodqn666 -- /bin/sh -x -c nslookup nodeport-service.services-1528.svc.cluster.local'
Feb 23 10:18:27.467: INFO: rc: 1
Feb 23 10:18:27.467: INFO: ExternalName service "services-1528/execpodqn666" failed to resolve to IP
Feb 23 10:18:29.222: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-1528 exec execpodqn666 -- /bin/sh -x -c nslookup nodeport-service.services-1528.svc.cluster.local'
Feb 23 10:18:29.443: INFO: rc: 1
Feb 23 10:18:29.443: INFO: ExternalName service "services-1528/execpodqn666" failed to resolve to IP
Feb 23 10:18:31.222: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-1528 exec execpodqn666 -- /bin/sh -x -c nslookup nodeport-service.services-1528.svc.cluster.local'
Feb 23 10:18:31.429: INFO: rc: 1
Feb 23 10:18:31.429: INFO: ExternalName service "services-1528/execpodqn666" failed to resolve to IP
Feb 23 10:18:33.222: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-1528 exec execpodqn666 -- /bin/sh -x -c nslookup nodeport-service.services-1528.svc.cluster.local'
Feb 23 10:18:33.495: INFO: rc: 1
Feb 23 10:18:33.495: INFO: ExternalName service "services-1528/execpodqn666" failed to resolve to IP
Feb 23 10:18:35.222: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-1528 exec execpodqn666 -- /bin/sh -x -c nslookup nodeport-service.services-1528.svc.cluster.local'
Feb 23 10:18:35.500: INFO: rc: 1
Feb 23 10:18:35.500: INFO: ExternalName service "services-1528/execpodqn666" failed to resolve to IP
Feb 23 10:18:37.222: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-1528 exec execpodqn666 -- /bin/sh -x -c nslookup nodeport-service.services-1528.svc.cluster.local'
Feb 23 10:18:37.457: INFO: rc: 1
Feb 23 10:18:37.457: INFO: ExternalName service "services-1528/execpodqn666" failed to resolve to IP
Feb 23 10:18:39.222: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-1528 exec execpodqn666 -- /bin/sh -x -c nslookup nodeport-service.services-1528.svc.cluster.local'
Feb 23 10:18:39.506: INFO: rc: 1
Feb 23 10:18:39.506: INFO: ExternalName service "services-1528/execpodqn666" failed to resolve to IP
Feb 23 10:18:41.222: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-1528 exec execpodqn666 -- /bin/sh -x -c nslookup nodeport-service.services-1528.svc.cluster.local'
Feb 23 10:18:41.579: INFO: rc: 1
Feb 23 10:18:41.579: INFO: ExternalName service "services-1528/execpodqn666" failed to resolve to IP
Feb 23 10:18:43.222: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-1528 exec execpodqn666 -- /bin/sh -x -c nslookup nodeport-service.services-1528.svc.cluster.local'
Feb 23 10:18:43.537: INFO: rc: 1
Feb 23 10:18:43.537: INFO: ExternalName service "services-1528/execpodqn666" failed to resolve to IP
Feb 23 10:18:45.222: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-1528 exec execpodqn666 -- /bin/sh -x -c nslookup nodeport-service.services-1528.svc.cluster.local'
Feb 23 10:18:45.605: INFO: rc: 1
Feb 23 10:18:45.605: INFO: ExternalName service "services-1528/execpodqn666" failed to resolve to IP
Feb 23 10:18:47.222: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-1528 exec execpodqn666 -- /bin/sh -x -c nslookup nodeport-service.services-1528.svc.cluster.local'
Feb 23 10:18:47.694: INFO: rc: 1
Feb 23 10:18:47.694: INFO: ExternalName service "services-1528/execpodqn666" failed to resolve to IP
Feb 23 10:18:49.222: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-1528 exec execpodqn666 -- /bin/sh -x -c nslookup nodeport-service.services-1528.svc.cluster.local'
Feb 23 10:18:49.579: INFO: rc: 1
Feb 23 10:18:49.579: INFO: ExternalName service "services-1528/execpodqn666" failed to resolve to IP
Feb 23 10:18:51.223: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-1528 exec execpodqn666 -- /bin/sh -x -c nslookup nodeport-service.services-1528.svc.cluster.local'
Feb 23 10:18:51.633: INFO: rc: 1
Feb 23 10:18:51.633: INFO: ExternalName service "services-1528/execpodqn666" failed to resolve to IP
Feb 23 10:18:53.222: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-1528 exec execpodqn666 -- /bin/sh -x -c nslookup nodeport-service.services-1528.svc.cluster.local'
Feb 23 10:18:53.694: INFO: rc: 1
Feb 23 10:18:53.694: INFO: ExternalName service "services-1528/execpodqn666" failed to resolve to IP
Feb 23 10:18:55.222: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-1528 exec execpodqn666 -- /bin/sh -x -c nslookup nodeport-service.services-1528.svc.cluster.local'
Feb 23 10:18:55.531: INFO: rc: 1
Feb 23 10:18:55.531: INFO: ExternalName service "services-1528/execpodqn666" failed to resolve to IP
Feb 23 10:18:57.223: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-1528 exec execpodqn666 -- /bin/sh -x -c nslookup nodeport-service.services-1528.svc.cluster.local'
Feb 23 10:18:57.450: INFO: rc: 1
Feb 23 10:18:57.450: INFO: ExternalName service "services-1528/execpodqn666" failed to resolve to IP
Feb 23 10:18:59.222: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-1528 exec execpodqn666 -- /bin/sh -x -c nslookup nodeport-service.services-1528.svc.cluster.local'
Feb 23 10:18:59.472: INFO: rc: 1
Feb 23 10:18:59.472: INFO: ExternalName service "services-1528/execpodqn666" failed to resolve to IP
Feb 23 10:19:01.222: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-1528 exec execpodqn666 -- /bin/sh -x -c nslookup nodeport-service.services-1528.svc.cluster.local'
Feb 23 10:19:01.441: INFO: rc: 1
Feb 23 10:19:01.441: INFO: ExternalName service "services-1528/execpodqn666" failed to resolve to IP
Feb 23 10:19:03.222: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-1528 exec execpodqn666 -- /bin/sh -x -c nslookup nodeport-service.services-1528.svc.cluster.local'
Feb 23 10:19:03.553: INFO: stderr: "+ nslookup nodeport-service.services-1528.svc.cluster.local\n"
Feb 23 10:19:03.553: INFO: stdout: "Server:\t\tfd00:10:96::a\nAddress:\tfd00:10:96::a#53\n\nnodeport-service.services-1528.svc.cluster.local\tcanonical name = externalsvc.services-1528.svc.cluster.local.\nName:\texternalsvc.services-1528.svc.cluster.local\nAddress: fd00:10:96::f7e7\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-1528, will wait for the garbage collector to delete the pods
Feb 23 10:19:03.629: INFO: Deleting ReplicationController externalsvc took: 20.69505ms
Feb 23 10:19:03.730: INFO: Terminating ReplicationController externalsvc pods took: 100.279975ms
... skipping 9 lines ...
• [SLOW TEST:93.425 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":1,"skipped":15,"failed":0}

SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 81 lines ...
test/e2e/network/framework.go:23
  Granular Checks: Services
  test/e2e/network/networking.go:150
    should function for client IP based session affinity: http [LinuxOnly]
    test/e2e/network/networking.go:415
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: http [LinuxOnly]","total":-1,"completed":5,"skipped":64,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 64 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl expose
  test/e2e/kubectl/kubectl.go:1234
    should create services for rc  [Conformance]
    test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":-1,"completed":2,"skipped":15,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] [sig-node] Downward API
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:19:04.574: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
  test/e2e/common/downward_api.go:109
STEP: Creating a pod to test downward api env vars
Feb 23 10:19:04.647: INFO: Waiting up to 5m0s for pod "downward-api-ba3a482a-5f9a-4a3c-9686-9880d6ba156a" in namespace "downward-api-280" to be "Succeeded or Failed"
Feb 23 10:19:04.660: INFO: Pod "downward-api-ba3a482a-5f9a-4a3c-9686-9880d6ba156a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.510768ms
Feb 23 10:19:06.664: INFO: Pod "downward-api-ba3a482a-5f9a-4a3c-9686-9880d6ba156a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016261457s
Feb 23 10:19:08.669: INFO: Pod "downward-api-ba3a482a-5f9a-4a3c-9686-9880d6ba156a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021419301s
Feb 23 10:19:10.674: INFO: Pod "downward-api-ba3a482a-5f9a-4a3c-9686-9880d6ba156a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026469696s
Feb 23 10:19:12.677: INFO: Pod "downward-api-ba3a482a-5f9a-4a3c-9686-9880d6ba156a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.029970256s
Feb 23 10:19:14.682: INFO: Pod "downward-api-ba3a482a-5f9a-4a3c-9686-9880d6ba156a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.034524154s
... skipping 2 lines ...
Feb 23 10:19:20.694: INFO: Pod "downward-api-ba3a482a-5f9a-4a3c-9686-9880d6ba156a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.046808599s
Feb 23 10:19:22.699: INFO: Pod "downward-api-ba3a482a-5f9a-4a3c-9686-9880d6ba156a": Phase="Pending", Reason="", readiness=false. Elapsed: 18.051787729s
Feb 23 10:19:24.706: INFO: Pod "downward-api-ba3a482a-5f9a-4a3c-9686-9880d6ba156a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.058181829s
Feb 23 10:19:26.711: INFO: Pod "downward-api-ba3a482a-5f9a-4a3c-9686-9880d6ba156a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.063243379s
Feb 23 10:19:28.714: INFO: Pod "downward-api-ba3a482a-5f9a-4a3c-9686-9880d6ba156a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.066902518s
STEP: Saw pod success
Feb 23 10:19:28.714: INFO: Pod "downward-api-ba3a482a-5f9a-4a3c-9686-9880d6ba156a" satisfied condition "Succeeded or Failed"
Feb 23 10:19:28.717: INFO: Trying to get logs from node kind-worker2 pod downward-api-ba3a482a-5f9a-4a3c-9686-9880d6ba156a container dapi-container: <nil>
STEP: delete the pod
Feb 23 10:19:28.731: INFO: Waiting for pod downward-api-ba3a482a-5f9a-4a3c-9686-9880d6ba156a to disappear
Feb 23 10:19:28.734: INFO: Pod downward-api-ba3a482a-5f9a-4a3c-9686-9880d6ba156a no longer exists
[AfterEach] [k8s.io] [sig-node] Downward API
  test/e2e/framework/framework.go:186
... skipping 66 lines ...
test/e2e/framework/framework.go:635
  when create a pod with lifecycle hook
  test/e2e/common/lifecycle_hook.go:43
    should execute poststart http hook properly [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":76,"failed":0}

SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/common/sysctl.go:35
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 11 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:19:30.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-5269" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls","total":-1,"completed":5,"skipped":90,"failed":0}

SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 27 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl server-side dry-run
  test/e2e/kubectl/kubectl.go:909
    should check if kubectl can dry-run update Pods [Conformance]
    test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":6,"skipped":66,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
• [SLOW TEST:14.133 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
  should observe PodDisruptionBudget status updated
  test/e2e/apps/disruption.go:97
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated","total":-1,"completed":2,"skipped":37,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 95 lines ...
test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/framework/framework.go:635
    should adopt matching orphans and release non-matching pods
    test/e2e/apps/statefulset.go:165
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods","total":-1,"completed":2,"skipped":13,"failed":0}

SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Discovery
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:19:36.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-3072" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery Custom resource should have storage version hash","total":-1,"completed":3,"skipped":32,"failed":0}

SSSSS
------------------------------
[BeforeEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:19:37.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5270" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":37,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 73 lines ...
• [SLOW TEST:57.249 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":77,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 43 lines ...
• [SLOW TEST:35.285 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should create endpoints for unready pods
  test/e2e/network/service.go:1614
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]","total":-1,"completed":4,"skipped":41,"failed":0}
[BeforeEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:19:28.748: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 20 lines ...
• [SLOW TEST:12.149 seconds]
[sig-apps] ReplicaSet
test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":5,"skipped":41,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
• [SLOW TEST:28.218 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a custom resource.
  test/e2e/apimachinery/resource_quota.go:585
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.","total":-1,"completed":5,"skipped":23,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:19:37.054: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
  test/e2e/common/security_context.go:212
Feb 23 10:19:37.153: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-6ca4d4ce-b004-4041-9250-f56755ba3857" in namespace "security-context-test-389" to be "Succeeded or Failed"
Feb 23 10:19:37.163: INFO: Pod "busybox-readonly-true-6ca4d4ce-b004-4041-9250-f56755ba3857": Phase="Pending", Reason="", readiness=false. Elapsed: 9.336561ms
Feb 23 10:19:39.166: INFO: Pod "busybox-readonly-true-6ca4d4ce-b004-4041-9250-f56755ba3857": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012723107s
Feb 23 10:19:41.171: INFO: Pod "busybox-readonly-true-6ca4d4ce-b004-4041-9250-f56755ba3857": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018196525s
Feb 23 10:19:43.186: INFO: Pod "busybox-readonly-true-6ca4d4ce-b004-4041-9250-f56755ba3857": Phase="Failed", Reason="", readiness=false. Elapsed: 6.033033438s
Feb 23 10:19:43.186: INFO: Pod "busybox-readonly-true-6ca4d4ce-b004-4041-9250-f56755ba3857" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:186
Feb 23 10:19:43.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-389" for this suite.


... skipping 2 lines ...
test/e2e/framework/framework.go:635
  When creating a pod with readOnlyRootFilesystem
  test/e2e/common/security_context.go:166
    should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
    test/e2e/common/security_context.go:212
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]","total":-1,"completed":5,"skipped":43,"failed":0}

SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:18:48.557: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:640
STEP: creating the pod
Feb 23 10:18:48.617: INFO: PodSpec: initContainers in spec.initContainers
Feb 23 10:19:43.744: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-b0215612-ac53-4563-a643-7f05df882e51", GenerateName:"", Namespace:"init-container-6261", SelfLink:"", UID:"62b2cb77-413d-4b18-af23-ba4c5ec06c01", ResourceVersion:"8949", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63749672328, loc:(*time.Location)(0x78e8d40)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"617187197"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003086060), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003086080)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003086180), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003086280)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-z8r62", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0030862a0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-z8r62", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-z8r62", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), 
SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-z8r62", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0030800d0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kind-worker", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0029d4000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003080160)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003080180)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003080188), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00308018c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002c06020), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749672328, loc:(*time.Location)(0x78e8d40)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749672328, loc:(*time.Location)(0x78e8d40)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749672328, loc:(*time.Location)(0x78e8d40)}}, Reason:"ContainersNotReady", Message:"containers with unready status: 
[run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749672328, loc:(*time.Location)(0x78e8d40)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"fc00:f853:ccd:e793::3", PodIP:"fd00:10:244:2::36", PodIPs:[]v1.PodIP{v1.PodIP{IP:"fd00:10:244:2::36"}}, StartTime:(*v1.Time)(0xc003086320), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0029d4150)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0029d41c0)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7", ContainerID:"containerd://e9f7e98608d6df9bab8a4cff127e70d3b8116c0ce513fdd42b07b86e5d33cdc8", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003086360), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003086340), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0030801e4)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:186
Feb 23 10:19:43.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6261" for this suite.


• [SLOW TEST:55.256 seconds]
[k8s.io] InitContainer [NodeConformance]
test/e2e/framework/framework.go:635
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":9,"skipped":66,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
Feb 23 10:19:43.567: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Feb 23 10:19:43.567: INFO: stdout: "controller-manager scheduler etcd-0"
STEP: getting details of componentstatuses
STEP: getting status of controller-manager
Feb 23 10:19:43.567: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=kubectl-6416 get componentstatuses controller-manager'
Feb 23 10:19:43.706: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Feb 23 10:19:43.706: INFO: stdout: "NAME                 STATUS      MESSAGE                                                                                       ERROR\ncontroller-manager   Unhealthy   Get \"http://127.0.0.1:10252/healthz\": dial tcp 127.0.0.1:10252: connect: connection refused   \n"
STEP: getting status of scheduler
Feb 23 10:19:43.707: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=kubectl-6416 get componentstatuses scheduler'
Feb 23 10:19:43.850: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Feb 23 10:19:43.850: INFO: stdout: "NAME        STATUS      MESSAGE                                                                                       ERROR\nscheduler   Unhealthy   Get \"http://127.0.0.1:10251/healthz\": dial tcp 127.0.0.1:10251: connect: connection refused   \n"
STEP: getting status of etcd-0
Feb 23 10:19:43.850: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=kubectl-6416 get componentstatuses etcd-0'
Feb 23 10:19:43.983: INFO: stderr: "Warning: v1 ComponentStatus is deprecated in v1.19+\n"
Feb 23 10:19:43.983: INFO: stdout: "NAME     STATUS    MESSAGE             ERROR\netcd-0   Healthy   {\"health\":\"true\"}   \n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:186
Feb 23 10:19:43.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6416" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses","total":-1,"completed":6,"skipped":64,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] Mount propagation
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 90 lines ...
Feb 23 10:19:30.338: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Feb 23 10:19:30.440: INFO: Exec stderr: ""
Feb 23 10:19:38.470: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c mkdir "/var/lib/kubelet/mount-propagation-2970"/host; mount -t tmpfs e2e-mount-propagation-host "/var/lib/kubelet/mount-propagation-2970"/host; echo host > "/var/lib/kubelet/mount-propagation-2970"/host/file] Namespace:mount-propagation-2970 PodName:hostexec-kind-worker2-5qv4l ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Feb 23 10:19:38.471: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Feb 23 10:19:38.637: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-2970 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 23 10:19:38.637: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Feb 23 10:19:38.757: INFO: pod private mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Feb 23 10:19:38.761: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-2970 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 23 10:19:38.761: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Feb 23 10:19:38.899: INFO: pod private mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Feb 23 10:19:38.905: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-2970 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 23 10:19:38.905: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Feb 23 10:19:39.030: INFO: pod private mount private: stdout: "private", stderr: "" error: <nil>
Feb 23 10:19:39.033: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-2970 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 23 10:19:39.033: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Feb 23 10:19:39.200: INFO: pod private mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Feb 23 10:19:39.206: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-2970 PodName:private ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 23 10:19:39.206: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Feb 23 10:19:39.451: INFO: pod private mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Feb 23 10:19:39.462: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-2970 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 23 10:19:39.462: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Feb 23 10:19:39.770: INFO: pod default mount master: stdout: "", stderr: "cat: can't open '/mnt/test/master/file': No such file or directory" error: command terminated with exit code 1
Feb 23 10:19:39.782: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-2970 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 23 10:19:39.782: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Feb 23 10:19:40.022: INFO: pod default mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Feb 23 10:19:40.025: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-2970 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 23 10:19:40.025: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Feb 23 10:19:40.147: INFO: pod default mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Feb 23 10:19:40.150: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-2970 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 23 10:19:40.150: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Feb 23 10:19:40.264: INFO: pod default mount default: stdout: "default", stderr: "" error: <nil>
Feb 23 10:19:40.268: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-2970 PodName:default ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 23 10:19:40.268: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Feb 23 10:19:40.393: INFO: pod default mount host: stdout: "", stderr: "cat: can't open '/mnt/test/host/file': No such file or directory" error: command terminated with exit code 1
Feb 23 10:19:40.396: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-2970 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 23 10:19:40.396: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Feb 23 10:19:40.541: INFO: pod master mount master: stdout: "master", stderr: "" error: <nil>
Feb 23 10:19:40.545: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-2970 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 23 10:19:40.545: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Feb 23 10:19:40.706: INFO: pod master mount slave: stdout: "", stderr: "cat: can't open '/mnt/test/slave/file': No such file or directory" error: command terminated with exit code 1
Feb 23 10:19:40.718: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-2970 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 23 10:19:40.718: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Feb 23 10:19:40.874: INFO: pod master mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Feb 23 10:19:40.880: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-2970 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 23 10:19:40.882: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Feb 23 10:19:41.021: INFO: pod master mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Feb 23 10:19:41.031: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-2970 PodName:master ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 23 10:19:41.031: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Feb 23 10:19:41.140: INFO: pod master mount host: stdout: "host", stderr: "" error: <nil>
Feb 23 10:19:41.144: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/master/file] Namespace:mount-propagation-2970 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 23 10:19:41.144: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Feb 23 10:19:41.247: INFO: pod slave mount master: stdout: "master", stderr: "" error: <nil>
Feb 23 10:19:41.250: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/slave/file] Namespace:mount-propagation-2970 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 23 10:19:41.250: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Feb 23 10:19:41.363: INFO: pod slave mount slave: stdout: "slave", stderr: "" error: <nil>
Feb 23 10:19:41.367: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/private/file] Namespace:mount-propagation-2970 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 23 10:19:41.367: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Feb 23 10:19:41.515: INFO: pod slave mount private: stdout: "", stderr: "cat: can't open '/mnt/test/private/file': No such file or directory" error: command terminated with exit code 1
Feb 23 10:19:41.518: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/default/file] Namespace:mount-propagation-2970 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 23 10:19:41.518: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Feb 23 10:19:41.667: INFO: pod slave mount default: stdout: "", stderr: "cat: can't open '/mnt/test/default/file': No such file or directory" error: command terminated with exit code 1
Feb 23 10:19:41.674: INFO: ExecWithOptions {Command:[/bin/sh -c cat /mnt/test/host/file] Namespace:mount-propagation-2970 PodName:slave ContainerName:cntr Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Feb 23 10:19:41.674: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Feb 23 10:19:41.809: INFO: pod slave mount host: stdout: "host", stderr: "" error: <nil>
Feb 23 10:19:41.809: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test `cat "/var/lib/kubelet/mount-propagation-2970"/master/file` = master] Namespace:mount-propagation-2970 PodName:hostexec-kind-worker2-5qv4l ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Feb 23 10:19:41.809: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Feb 23 10:19:41.943: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c test ! -e "/var/lib/kubelet/mount-propagation-2970"/slave/file] Namespace:mount-propagation-2970 PodName:hostexec-kind-worker2-5qv4l ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Feb 23 10:19:41.943: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Feb 23 10:19:42.087: INFO: ExecWithOptions {Command:[nsenter --mount=/rootfs/proc/1/ns/mnt -- sh -c umount "/var/lib/kubelet/mount-propagation-2970"/host] Namespace:mount-propagation-2970 PodName:hostexec-kind-worker2-5qv4l ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:true Quiet:false}
Feb 23 10:19:42.087: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 21 lines ...
• [SLOW TEST:62.724 seconds]
[k8s.io] [sig-node] Mount propagation
test/e2e/framework/framework.go:635
  should propagate mounts to the host
  test/e2e/node/mount_propagation.go:82
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Mount propagation should propagate mounts to the host","total":-1,"completed":6,"skipped":67,"failed":0}

SSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] PreStop
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 36 lines ...
• [SLOW TEST:39.188 seconds]
[k8s.io] [sig-node] PreStop
test/e2e/framework/framework.go:635
  should call prestop when killing a pod  [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":-1,"completed":6,"skipped":81,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 80 lines ...
• [SLOW TEST:91.745 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":89,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
  test/e2e/common/runtime.go:41
    on terminated container
    test/e2e/common/runtime.go:134
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":74,"failed":0}

SS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 51 lines ...
• [SLOW TEST:12.300 seconds]
[sig-network] DNS
test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":-1,"completed":6,"skipped":58,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:21.276 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":3,"skipped":47,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] Pods
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 23 lines ...
• [SLOW TEST:12.357 seconds]
[k8s.io] Pods
test/e2e/framework/framework.go:635
  should support remote command execution over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":89,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-scheduling] Multi-AZ Clusters
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 90 lines ...
• [SLOW TEST:53.763 seconds]
[sig-network] Networking
test/e2e/network/framework.go:23
  should check kube-proxy urls
  test/e2e/network/networking.go:137
------------------------------
{"msg":"PASSED [sig-network] Networking should check kube-proxy urls","total":-1,"completed":3,"skipped":13,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 39 lines ...
  test/e2e/kubectl/portforward.go:474
    that expects a client request
    test/e2e/kubectl/portforward.go:475
      should support a client that connects, sends NO DATA, and disconnects
      test/e2e/kubectl/portforward.go:476
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects","total":-1,"completed":5,"skipped":82,"failed":0}

SSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 36 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl replace
  test/e2e/kubectl/kubectl.go:1556
    should update a single-container pod's image  [Conformance]
    test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":-1,"completed":7,"skipped":77,"failed":0}

SS
------------------------------
[BeforeEach] [k8s.io] [sig-node] ConfigMap
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:20:01.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1806" for this suite.

•
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] ConfigMap should update ConfigMap successfully","total":-1,"completed":8,"skipped":79,"failed":0}

SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 63 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl logs
  test/e2e/kubectl/kubectl.go:1394
    should be able to retrieve and filter logs  [Conformance]
    test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":-1,"completed":10,"skipped":71,"failed":0}

SSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":3,"skipped":16,"failed":0}
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:19:33.979: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 243 lines ...
test/e2e/kubectl/framework.go:23
  Guestbook application
  test/e2e/kubectl/kubectl.go:342
    should create and stop a working application  [Conformance]
    test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":-1,"completed":5,"skipped":95,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Pods
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 26 lines ...
• [SLOW TEST:10.963 seconds]
[k8s.io] Pods
test/e2e/framework/framework.go:635
  should be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":90,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 37 lines ...
Feb 23 10:19:58.703: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=crd-publish-openapi-5249 explain e2e-test-crd-publish-openapi-7116-crds.spec'
Feb 23 10:19:59.044: INFO: stderr: ""
Feb 23 10:19:59.044: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7116-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Feb 23 10:19:59.044: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=crd-publish-openapi-5249 explain e2e-test-crd-publish-openapi-7116-crds.spec.bars'
Feb 23 10:19:59.403: INFO: stderr: ""
Feb 23 10:19:59.403: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7116-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Feb 23 10:19:59.403: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=crd-publish-openapi-5249 explain e2e-test-crd-publish-openapi-7116-crds.spec.bars2'
Feb 23 10:19:59.775: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:186
Feb 23 10:20:11.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5249" for this suite.
... skipping 2 lines ...
• [SLOW TEST:21.338 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":8,"skipped":90,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 71 lines ...
test/e2e/network/framework.go:23
  Granular Checks: Services
  test/e2e/network/networking.go:150
    should function for pod-Service: udp
    test/e2e/network/networking.go:167
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should function for pod-Service: udp","total":-1,"completed":6,"skipped":48,"failed":0}

SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 40 lines ...
• [SLOW TEST:25.842 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":7,"skipped":133,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:18:50.755: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] CronJob
  test/e2e/apps/cronjob.go:58
[It] should delete failed finished jobs with limit of one job
  test/e2e/apps/cronjob.go:273
STEP: Creating an AllowConcurrent cronjob with custom history limit
STEP: Ensuring a finished job exists
STEP: Ensuring a finished job exists by listing jobs explicitly
STEP: Ensuring this job and its pods does not exist anymore
STEP: Ensuring there is 1 finished job by listing jobs explicitly
... skipping 4 lines ...
STEP: Destroying namespace "cronjob-3419" for this suite.


• [SLOW TEST:82.355 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should delete failed finished jobs with limit of one job
  test/e2e/apps/cronjob.go:273
------------------------------
S
------------------------------
{"msg":"PASSED [sig-apps] CronJob should delete failed finished jobs with limit of one job","total":-1,"completed":3,"skipped":54,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 20 lines ...
STEP: Destroying namespace "services-3000" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:744

•
------------------------------
{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":7,"skipped":67,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
• [SLOW TEST:34.212 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":6,"skipped":95,"failed":0}

SSSSS
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:20:08.083: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:640
STEP: Creating a pod to test substitution in container's args
Feb 23 10:20:08.267: INFO: Waiting up to 5m0s for pod "var-expansion-683d22e0-11f7-4821-a377-fd61cb625151" in namespace "var-expansion-7414" to be "Succeeded or Failed"
Feb 23 10:20:08.307: INFO: Pod "var-expansion-683d22e0-11f7-4821-a377-fd61cb625151": Phase="Pending", Reason="", readiness=false. Elapsed: 39.835213ms
Feb 23 10:20:10.318: INFO: Pod "var-expansion-683d22e0-11f7-4821-a377-fd61cb625151": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050956454s
Feb 23 10:20:12.362: INFO: Pod "var-expansion-683d22e0-11f7-4821-a377-fd61cb625151": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095201712s
Feb 23 10:20:14.366: INFO: Pod "var-expansion-683d22e0-11f7-4821-a377-fd61cb625151": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09894659s
Feb 23 10:20:16.370: INFO: Pod "var-expansion-683d22e0-11f7-4821-a377-fd61cb625151": Phase="Pending", Reason="", readiness=false. Elapsed: 8.102920794s
Feb 23 10:20:18.385: INFO: Pod "var-expansion-683d22e0-11f7-4821-a377-fd61cb625151": Phase="Pending", Reason="", readiness=false. Elapsed: 10.11833715s
Feb 23 10:20:20.399: INFO: Pod "var-expansion-683d22e0-11f7-4821-a377-fd61cb625151": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.131810806s
STEP: Saw pod success
Feb 23 10:20:20.399: INFO: Pod "var-expansion-683d22e0-11f7-4821-a377-fd61cb625151" satisfied condition "Succeeded or Failed"
Feb 23 10:20:20.404: INFO: Trying to get logs from node kind-worker2 pod var-expansion-683d22e0-11f7-4821-a377-fd61cb625151 container dapi-container: <nil>
STEP: delete the pod
Feb 23 10:20:20.443: INFO: Waiting for pod var-expansion-683d22e0-11f7-4821-a377-fd61cb625151 to disappear
Feb 23 10:20:20.446: INFO: Pod var-expansion-683d22e0-11f7-4821-a377-fd61cb625151 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.382 seconds]
[k8s.io] Variable Expansion
test/e2e/framework/framework.go:635
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":125,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 65 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:382
    should support exec
    test/e2e/kubectl/kubectl.go:394
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support exec","total":-1,"completed":4,"skipped":52,"failed":0}

SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 68 lines ...
test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/framework/framework.go:635
    should not deadlock when a pod's predecessor fails
    test/e2e/apps/statefulset.go:250
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails","total":-1,"completed":3,"skipped":13,"failed":0}

SSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 29 lines ...
• [SLOW TEST:12.769 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":8,"skipped":166,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-network] Services should create endpoints for unready pods","total":-1,"completed":8,"skipped":67,"failed":0}
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:19:39.165: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 49 lines ...
test/e2e/network/framework.go:23
  Granular Checks: Services
  test/e2e/network/networking.go:150
    should be able to handle large requests: http
    test/e2e/network/networking.go:450
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should be able to handle large requests: http","total":-1,"completed":9,"skipped":67,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 38 lines ...
• [SLOW TEST:32.660 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":4,"skipped":49,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-instrumentation] MetricsGrabber
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 10 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:20:28.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "metrics-grabber-5446" for this suite.

•
------------------------------
{"msg":"PASSED [sig-instrumentation] MetricsGrabber should grab all metrics from API server.","total":-1,"completed":5,"skipped":58,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 8 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:20:28.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-426" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server","total":-1,"completed":6,"skipped":65,"failed":0}

SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 99 lines ...
• [SLOW TEST:25.190 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  should run the lifecycle of a Deployment [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":11,"skipped":76,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 36 lines ...
• [SLOW TEST:36.079 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
  test/e2e/network/service.go:1169
------------------------------
{"msg":"PASSED [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node","total":-1,"completed":7,"skipped":91,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 6 lines ...
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0223 10:19:28.389932   18612 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Feb 23 10:20:30.410: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
Feb 23 10:20:30.410: INFO: Deleting pod "simpletest-rc-to-be-deleted-2nmsm" in namespace "gc-4565"
Feb 23 10:20:30.432: INFO: Deleting pod "simpletest-rc-to-be-deleted-8slx9" in namespace "gc-4565"
Feb 23 10:20:30.450: INFO: Deleting pod "simpletest-rc-to-be-deleted-9dptc" in namespace "gc-4565"
Feb 23 10:20:30.474: INFO: Deleting pod "simpletest-rc-to-be-deleted-9x2ph" in namespace "gc-4565"
Feb 23 10:20:30.535: INFO: Deleting pod "simpletest-rc-to-be-deleted-bpnpf" in namespace "gc-4565"
[AfterEach] [sig-api-machinery] Garbage collector
... skipping 5 lines ...
• [SLOW TEST:72.421 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":3,"skipped":19,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 18 lines ...
• [SLOW TEST:60.298 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":6,"skipped":111,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Server request timeout
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 32 lines ...
• [SLOW TEST:10.868 seconds]
[sig-auth] Certificates API [Privileged:ClusterAdmin]
test/e2e/auth/framework.go:23
  should support building a client with a CSR
  test/e2e/auth/certificates.go:55
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR","total":-1,"completed":7,"skipped":127,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:20:15.200: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  test/e2e/common/security_context.go:41
[It] should run with an image specified user ID
  test/e2e/common/security_context.go:146
Feb 23 10:20:15.313: INFO: Waiting up to 5m0s for pod "implicit-nonroot-uid" in namespace "security-context-test-3878" to be "Succeeded or Failed"
Feb 23 10:20:15.322: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 7.873531ms
Feb 23 10:20:17.413: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099279295s
Feb 23 10:20:19.425: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110684451s
Feb 23 10:20:21.430: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11620367s
Feb 23 10:20:23.435: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12058282s
Feb 23 10:20:25.453: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 10.138551119s
Feb 23 10:20:27.459: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 12.145322176s
Feb 23 10:20:29.496: INFO: Pod "implicit-nonroot-uid": Phase="Pending", Reason="", readiness=false. Elapsed: 14.182403088s
Feb 23 10:20:31.504: INFO: Pod "implicit-nonroot-uid": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.189740831s
Feb 23 10:20:31.504: INFO: Pod "implicit-nonroot-uid" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  test/e2e/framework/framework.go:186
Feb 23 10:20:31.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3878" for this suite.


... skipping 2 lines ...
test/e2e/framework/framework.go:635
  When creating a container with runAsNonRoot
  test/e2e/common/security_context.go:99
    should run with an image specified user ID
    test/e2e/common/security_context.go:146
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsNonRoot should run with an image specified user ID","total":-1,"completed":7,"skipped":100,"failed":0}

SSSSSSSSSSS
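The "Waiting up to 5m0s for pod … to be 'Succeeded or Failed'" lines above come from a poll-until-phase loop in the e2e framework. A rough Python sketch of that pattern, under stated assumptions — `get_phase` is a hypothetical stand-in for a Kubernetes API lookup, and the real framework's intervals and error handling differ:

```python
import itertools
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300.0, interval=0.01):
    """Poll get_phase() until it returns a phase in `want` or `timeout` elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in want:
            return phase
        time.sleep(interval)        # the real framework polls every ~2s
    raise TimeoutError("pod never reached any of %s" % (want,))

# Stub phase sequence: Pending twice, then Succeeded, mimicking the log above.
phases = itertools.chain(["Pending", "Pending"], itertools.repeat("Succeeded"))
result = wait_for_pod_phase(lambda: next(phases))
print(result)
```

The terminal-phase tuple matches the condition string in the log ("Succeeded or Failed"): the loop stops on either outcome and lets the caller decide whether the run passed.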
------------------------------
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
• [SLOW TEST:20.654 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should run a job to completion when tasks succeed
  test/e2e/apps/job.go:48
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":7,"skipped":95,"failed":0}

SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/common/sysctl.go:35
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/common/sysctl.go:64
[It] should support unsafe sysctls which are actually whitelisted
  test/e2e/common/sysctl.go:108
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:8.169 seconds]
[k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
test/e2e/framework/framework.go:635
  should support unsafe sysctls which are actually whitelisted
  test/e2e/common/sysctl.go:108
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted","total":-1,"completed":5,"skipped":73,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
STEP: Destroying namespace "services-7595" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:744

•
------------------------------
{"msg":"PASSED [sig-network] Services should prevent NodePort collisions","total":-1,"completed":6,"skipped":121,"failed":0}

S
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 39 lines ...
  test/e2e/kubectl/portforward.go:474
    that expects NO client request
    test/e2e/kubectl/portforward.go:484
      should support a client that connects, sends DATA, and disconnects
      test/e2e/kubectl/portforward.go:485
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":4,"skipped":55,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 57 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:382
    should support port-forward
    test/e2e/kubectl/kubectl.go:625
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support port-forward","total":-1,"completed":9,"skipped":94,"failed":0}

S
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":8,"skipped":74,"failed":0}
[BeforeEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:20:27.232: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 33 lines ...
• [SLOW TEST:12.324 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":9,"skipped":74,"failed":0}

SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/common/sysctl.go:35
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/common/sysctl.go:64
[It] should support sysctls
  test/e2e/common/sysctl.go:68
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
  test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.155 seconds]
[k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
test/e2e/framework/framework.go:635
  should support sysctls
  test/e2e/common/sysctl.go:68
------------------------------
{"msg":"PASSED [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls","total":-1,"completed":12,"skipped":85,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 52 lines ...
    Only supported for node OS distro [gci ubuntu] (not debian)

    test/e2e/framework/skipper/skipper.go:267
------------------------------
SSS
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource","total":-1,"completed":7,"skipped":77,"failed":0}
[BeforeEach] [k8s.io] Pods
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:20:41.580: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 95 lines ...
test/e2e/kubectl/framework.go:23
  Simple pod
  test/e2e/kubectl/kubectl.go:382
    should support inline execution and attach
    test/e2e/kubectl/kubectl.go:551
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support inline execution and attach","total":-1,"completed":7,"skipped":35,"failed":0}

SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Conntrack
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 58 lines ...
• [SLOW TEST:45.314 seconds]
[sig-network] Conntrack
test/e2e/network/framework.go:23
  should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
  test/e2e/network/conntrack.go:202
------------------------------
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":9,"skipped":98,"failed":0}

SSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":4,"skipped":16,"failed":0}
[BeforeEach] [k8s.io] Pods
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:20:07.029: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 29 lines ...
• [SLOW TEST:39.971 seconds]
[k8s.io] Pods
test/e2e/framework/framework.go:635
  should support pod readiness gates [NodeFeature:PodReadinessGate]
  test/e2e/common/pods.go:778
------------------------------
{"msg":"PASSED [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]","total":-1,"completed":5,"skipped":16,"failed":0}

SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] Security Context
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:20:31.361: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser [LinuxOnly]
  test/e2e/node/security_context.go:103
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Feb 23 10:20:31.483: INFO: Waiting up to 5m0s for pod "security-context-e69432aa-914a-421b-b1b9-ade74c018f54" in namespace "security-context-1653" to be "Succeeded or Failed"
Feb 23 10:20:31.489: INFO: Pod "security-context-e69432aa-914a-421b-b1b9-ade74c018f54": Phase="Pending", Reason="", readiness=false. Elapsed: 5.164339ms
Feb 23 10:20:33.492: INFO: Pod "security-context-e69432aa-914a-421b-b1b9-ade74c018f54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008351058s
Feb 23 10:20:35.509: INFO: Pod "security-context-e69432aa-914a-421b-b1b9-ade74c018f54": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025101421s
Feb 23 10:20:37.535: INFO: Pod "security-context-e69432aa-914a-421b-b1b9-ade74c018f54": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051587204s
Feb 23 10:20:39.541: INFO: Pod "security-context-e69432aa-914a-421b-b1b9-ade74c018f54": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057552959s
Feb 23 10:20:41.545: INFO: Pod "security-context-e69432aa-914a-421b-b1b9-ade74c018f54": Phase="Pending", Reason="", readiness=false. Elapsed: 10.061435851s
Feb 23 10:20:43.549: INFO: Pod "security-context-e69432aa-914a-421b-b1b9-ade74c018f54": Phase="Pending", Reason="", readiness=false. Elapsed: 12.065500045s
Feb 23 10:20:45.554: INFO: Pod "security-context-e69432aa-914a-421b-b1b9-ade74c018f54": Phase="Pending", Reason="", readiness=false. Elapsed: 14.070978238s
Feb 23 10:20:47.559: INFO: Pod "security-context-e69432aa-914a-421b-b1b9-ade74c018f54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.075308803s
STEP: Saw pod success
Feb 23 10:20:47.559: INFO: Pod "security-context-e69432aa-914a-421b-b1b9-ade74c018f54" satisfied condition "Succeeded or Failed"
Feb 23 10:20:47.561: INFO: Trying to get logs from node kind-worker pod security-context-e69432aa-914a-421b-b1b9-ade74c018f54 container test-container: <nil>
STEP: delete the pod
Feb 23 10:20:47.581: INFO: Waiting for pod security-context-e69432aa-914a-421b-b1b9-ade74c018f54 to disappear
Feb 23 10:20:47.584: INFO: Pod security-context-e69432aa-914a-421b-b1b9-ade74c018f54 no longer exists
[AfterEach] [k8s.io] [sig-node] Security Context
  test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:16.231 seconds]
[k8s.io] [sig-node] Security Context
test/e2e/framework/framework.go:635
  should support container.SecurityContext.RunAsUser [LinuxOnly]
  test/e2e/node/security_context.go:103
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":8,"skipped":136,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:20:37.366: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:640
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 23 10:20:49.617: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
... skipping 9 lines ...
  test/e2e/common/runtime.go:41
    on terminated container
    test/e2e/common/runtime.go:134
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":95,"failed":0}

SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 163 lines ...
test/e2e/network/framework.go:23
  Granular Checks: Services
  test/e2e/network/networking.go:150
    should update endpoints: udp
    test/e2e/network/networking.go:350
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should update endpoints: udp","total":-1,"completed":5,"skipped":75,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 8 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:20:51.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-2885" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":6,"skipped":84,"failed":0}

S
------------------------------
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 62 lines ...
• [SLOW TEST:25.466 seconds]
[k8s.io] KubeletManagedEtcHosts
test/e2e/framework/framework.go:635
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":201,"failed":0}

SSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:20:39.589: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:640
STEP: Creating a pod to test override all
Feb 23 10:20:39.646: INFO: Waiting up to 5m0s for pod "client-containers-1880603e-8574-4593-9f4b-963763107b59" in namespace "containers-962" to be "Succeeded or Failed"
Feb 23 10:20:39.660: INFO: Pod "client-containers-1880603e-8574-4593-9f4b-963763107b59": Phase="Pending", Reason="", readiness=false. Elapsed: 14.597131ms
Feb 23 10:20:41.667: INFO: Pod "client-containers-1880603e-8574-4593-9f4b-963763107b59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021261862s
Feb 23 10:20:43.671: INFO: Pod "client-containers-1880603e-8574-4593-9f4b-963763107b59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02527951s
Feb 23 10:20:45.676: INFO: Pod "client-containers-1880603e-8574-4593-9f4b-963763107b59": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029998445s
Feb 23 10:20:47.684: INFO: Pod "client-containers-1880603e-8574-4593-9f4b-963763107b59": Phase="Pending", Reason="", readiness=false. Elapsed: 8.038397635s
Feb 23 10:20:49.689: INFO: Pod "client-containers-1880603e-8574-4593-9f4b-963763107b59": Phase="Pending", Reason="", readiness=false. Elapsed: 10.043700341s
Feb 23 10:20:51.694: INFO: Pod "client-containers-1880603e-8574-4593-9f4b-963763107b59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.048178992s
STEP: Saw pod success
Feb 23 10:20:51.694: INFO: Pod "client-containers-1880603e-8574-4593-9f4b-963763107b59" satisfied condition "Succeeded or Failed"
Feb 23 10:20:51.698: INFO: Trying to get logs from node kind-worker pod client-containers-1880603e-8574-4593-9f4b-963763107b59 container agnhost-container: <nil>
STEP: delete the pod
Feb 23 10:20:51.713: INFO: Waiting for pod client-containers-1880603e-8574-4593-9f4b-963763107b59 to disappear
Feb 23 10:20:51.716: INFO: Pod client-containers-1880603e-8574-4593-9f4b-963763107b59 no longer exists
[AfterEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:12.136 seconds]
[k8s.io] Docker Containers
test/e2e/framework/framework.go:635
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":94,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 9 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:20:51.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-7012" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls","total":-1,"completed":11,"skipped":95,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl Port forwarding
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 43 lines ...
  test/e2e/kubectl/portforward.go:474
    that expects a client request
    test/e2e/kubectl/portforward.go:475
      should support a client that connects, sends DATA, and disconnects
      test/e2e/kubectl/portforward.go:479
------------------------------
{"msg":"PASSED [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects","total":-1,"completed":10,"skipped":102,"failed":0}

SSSS
------------------------------
[BeforeEach] [k8s.io] Pods
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 34 lines ...
• [SLOW TEST:8.351 seconds]
[k8s.io] Pods
test/e2e/framework/framework.go:635
  should run through the lifecycle of Pods and PodStatus [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [k8s.io] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":8,"skipped":54,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 33 lines ...
• [SLOW TEST:9.618 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should allow pods to hairpin back to themselves through services
  test/e2e/network/service.go:979
------------------------------
{"msg":"PASSED [sig-network] Services should allow pods to hairpin back to themselves through services","total":-1,"completed":11,"skipped":108,"failed":0}

SSSS
------------------------------
[BeforeEach] [sig-network] Services
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 50 lines ...
Feb 23 10:19:47.462: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Feb 23 10:19:49.466: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Feb 23 10:19:51.466: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Feb 23 10:19:53.479: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Feb 23 10:19:55.492: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Feb 23 10:19:57.475: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Feb 23 10:19:57.475: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-4817 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://[fd00:10:96::d288]:80 && echo service-down-failed'
Feb 23 10:19:59.286: INFO: rc: 7
Feb 23 10:19:59.286: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://[fd00:10:96::d288]:80 && echo service-down-failed" in pod services-4817/verify-service-down-host-exec-pod: error running /home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-4817 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://[fd00:10:96::d288]:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 'http://[fd00:10:96::d288]:80'
command terminated with exit code 7

error:
exit status 7
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-4817
STEP: adding service-proxy-name label
STEP: verifying service is not up
Feb 23 10:19:59.468: INFO: Creating new host exec pod
... skipping 7 lines ...
Feb 23 10:20:13.572: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Feb 23 10:20:15.578: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Feb 23 10:20:17.620: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Feb 23 10:20:19.570: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Feb 23 10:20:21.573: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Feb 23 10:20:23.570: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Feb 23 10:20:23.570: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-4817 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://[fd00:10:96::2db4]:80 && echo service-down-failed'
Feb 23 10:20:24.889: INFO: rc: 7
Feb 23 10:20:24.889: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://[fd00:10:96::2db4]:80 && echo service-down-failed" in pod services-4817/verify-service-down-host-exec-pod: error running /home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-4817 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://[fd00:10:96::2db4]:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 'http://[fd00:10:96::2db4]:80'
command terminated with exit code 7

error:
exit status 7
Output: 
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-4817
STEP: removing service-proxy-name annotation
STEP: verifying service is up
Feb 23 10:20:25.074: INFO: Creating new host exec pod
... skipping 18 lines ...
STEP: verifying service-disabled is still not up
Feb 23 10:20:52.341: INFO: Creating new host exec pod
Feb 23 10:20:52.353: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Feb 23 10:20:54.358: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Feb 23 10:20:56.358: INFO: The status of Pod verify-service-down-host-exec-pod is Pending, waiting for it to be Running (with Ready = true)
Feb 23 10:20:58.358: INFO: The status of Pod verify-service-down-host-exec-pod is Running (Ready = true)
Feb 23 10:20:58.358: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-4817 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://[fd00:10:96::d288]:80 && echo service-down-failed'
Feb 23 10:20:59.584: INFO: rc: 7
Feb 23 10:20:59.584: INFO: error while kubectl execing "curl -g -s --connect-timeout 2 http://[fd00:10:96::d288]:80 && echo service-down-failed" in pod services-4817/verify-service-down-host-exec-pod: error running /home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=services-4817 exec verify-service-down-host-exec-pod -- /bin/sh -x -c curl -g -s --connect-timeout 2 http://[fd00:10:96::d288]:80 && echo service-down-failed:
Command stdout:

stderr:
+ curl -g -s --connect-timeout 2 'http://[fd00:10:96::d288]:80'
command terminated with exit code 7

error:
exit status 7
Output: 
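Editor's note: curl exit code 7 (CURLE_COULDNT_CONNECT) means the TCP connection itself failed, which is the expected outcome here since the test verifies the service is down; the `&& echo service-down-failed` idiom makes the whole command succeed only if the connection unexpectedly works. A minimal sketch of the same check in Python (a stand-in for curl's connect phase, not the test framework's code; port 1 on localhost is assumed to have no listener):

```python
import socket

def can_connect(host, port, timeout=2.0):
    """Rough analogue of `curl --connect-timeout 2`: True iff a TCP
    connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Port 1 on localhost normally has no listener, mirroring the torn-down
# service endpoint above (which made curl exit with code 7).
print(can_connect("127.0.0.1", 1))  # → False
```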
STEP: Deleting pod verify-service-down-host-exec-pod in namespace services-4817
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:186
Feb 23 10:20:59.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 5 lines ...
• [SLOW TEST:129.273 seconds]
[sig-network] Services
test/e2e/network/framework.go:23
  should implement service.kubernetes.io/service-proxy-name
  test/e2e/network/service.go:1855
------------------------------
{"msg":"PASSED [sig-network] Services should implement service.kubernetes.io/service-proxy-name","total":-1,"completed":5,"skipped":31,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] [sig-node] Security Context
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:20:52.459: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support seccomp runtime/default [LinuxOnly]
  test/e2e/node/security_context.go:164
STEP: Creating a pod to test seccomp.security.alpha.kubernetes.io/pod
Feb 23 10:20:52.558: INFO: Waiting up to 5m0s for pod "security-context-2f4ccbf9-8c3d-4f21-a47b-3bf8fb20346d" in namespace "security-context-9202" to be "Succeeded or Failed"
Feb 23 10:20:52.563: INFO: Pod "security-context-2f4ccbf9-8c3d-4f21-a47b-3bf8fb20346d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.98677ms
Feb 23 10:20:54.568: INFO: Pod "security-context-2f4ccbf9-8c3d-4f21-a47b-3bf8fb20346d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00992372s
Feb 23 10:20:56.573: INFO: Pod "security-context-2f4ccbf9-8c3d-4f21-a47b-3bf8fb20346d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014614353s
Feb 23 10:20:58.576: INFO: Pod "security-context-2f4ccbf9-8c3d-4f21-a47b-3bf8fb20346d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.018304007s
Feb 23 10:21:00.580: INFO: Pod "security-context-2f4ccbf9-8c3d-4f21-a47b-3bf8fb20346d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.021784242s
Feb 23 10:21:02.584: INFO: Pod "security-context-2f4ccbf9-8c3d-4f21-a47b-3bf8fb20346d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.025888714s
STEP: Saw pod success
Feb 23 10:21:02.584: INFO: Pod "security-context-2f4ccbf9-8c3d-4f21-a47b-3bf8fb20346d" satisfied condition "Succeeded or Failed"
Feb 23 10:21:02.587: INFO: Trying to get logs from node kind-worker pod security-context-2f4ccbf9-8c3d-4f21-a47b-3bf8fb20346d container test-container: <nil>
STEP: delete the pod
Feb 23 10:21:02.602: INFO: Waiting for pod security-context-2f4ccbf9-8c3d-4f21-a47b-3bf8fb20346d to disappear
Feb 23 10:21:02.605: INFO: Pod security-context-2f4ccbf9-8c3d-4f21-a47b-3bf8fb20346d no longer exists
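Editor's note: the "Waiting up to 5m0s ... to be 'Succeeded or Failed'" lines above are a poll loop over the pod's status phase at a fixed interval. A hedged sketch of that pattern (the stub iterator is hypothetical, standing in for an API read of `pod.status.phase`; this is not the framework's actual implementation):

```python
import time

def wait_for_phase(get_phase, want=("Succeeded", "Failed"),
                   timeout=300.0, interval=2.0):
    """Poll get_phase() every `interval` seconds until it returns one of
    `want`, mirroring the 'Succeeded or Failed' wait in the log above."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in want:
            return phase
        time.sleep(interval)
    raise TimeoutError("pod never reached one of %s" % (want,))

# Hypothetical stub: phases a client might observe on successive reads.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_phase(lambda: next(phases), interval=0.0))  # → 'Succeeded'
```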
[AfterEach] [k8s.io] [sig-node] Security Context
  test/e2e/framework/framework.go:186
... skipping 4 lines ...
• [SLOW TEST:10.156 seconds]
[k8s.io] [sig-node] Security Context
test/e2e/framework/framework.go:635
  should support seccomp runtime/default [LinuxOnly]
  test/e2e/node/security_context.go:164
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]","total":-1,"completed":11,"skipped":106,"failed":0}

SSSSSSSS
------------------------------
{"msg":"PASSED [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]","total":-1,"completed":6,"skipped":192,"failed":0}
[BeforeEach] [sig-network] Networking
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Feb 23 10:18:58.088: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 160 lines ...
test/e2e/network/framework.go:23
  Granular Checks: Services
  test/e2e/network/networking.go:150
    should update endpoints: http
    test/e2e/network/networking.go:333
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Services should update endpoints: http","total":-1,"completed":7,"skipped":192,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Events
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 15 lines ...
  test/e2e/framework/framework.go:186
Feb 23 10:21:03.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-278" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":-1,"completed":8,"skipped":201,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 24 lines ...
Feb 23 10:20:35.410: INFO: Unable to read jessie_udp@dns-test-service.dns-1862 from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:35.422: INFO: Unable to read jessie_tcp@dns-test-service.dns-1862 from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:35.433: INFO: Unable to read jessie_udp@dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:35.439: INFO: Unable to read jessie_tcp@dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:35.454: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:35.475: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:35.551: INFO: Lookups using dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1862 wheezy_tcp@dns-test-service.dns-1862 wheezy_udp@dns-test-service.dns-1862.svc wheezy_tcp@dns-test-service.dns-1862.svc wheezy_udp@_http._tcp.dns-test-service.dns-1862.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1862.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1862 jessie_tcp@dns-test-service.dns-1862 jessie_udp@dns-test-service.dns-1862.svc jessie_tcp@dns-test-service.dns-1862.svc jessie_udp@_http._tcp.dns-test-service.dns-1862.svc jessie_tcp@_http._tcp.dns-test-service.dns-1862.svc]

Feb 23 10:20:40.558: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:40.563: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:40.569: INFO: Unable to read wheezy_udp@dns-test-service.dns-1862 from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:40.573: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1862 from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:40.577: INFO: Unable to read wheezy_udp@dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
... skipping 5 lines ...
Feb 23 10:20:40.642: INFO: Unable to read jessie_udp@dns-test-service.dns-1862 from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:40.647: INFO: Unable to read jessie_tcp@dns-test-service.dns-1862 from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:40.656: INFO: Unable to read jessie_udp@dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:40.663: INFO: Unable to read jessie_tcp@dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:40.669: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:40.686: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:40.721: INFO: Lookups using dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1862 wheezy_tcp@dns-test-service.dns-1862 wheezy_udp@dns-test-service.dns-1862.svc wheezy_tcp@dns-test-service.dns-1862.svc wheezy_udp@_http._tcp.dns-test-service.dns-1862.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1862.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1862 jessie_tcp@dns-test-service.dns-1862 jessie_udp@dns-test-service.dns-1862.svc jessie_tcp@dns-test-service.dns-1862.svc jessie_udp@_http._tcp.dns-test-service.dns-1862.svc jessie_tcp@_http._tcp.dns-test-service.dns-1862.svc]

Feb 23 10:20:45.555: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:45.559: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:45.563: INFO: Unable to read wheezy_udp@dns-test-service.dns-1862 from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:45.567: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1862 from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:45.579: INFO: Unable to read wheezy_udp@dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
... skipping 5 lines ...
Feb 23 10:20:45.658: INFO: Unable to read jessie_udp@dns-test-service.dns-1862 from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:45.664: INFO: Unable to read jessie_tcp@dns-test-service.dns-1862 from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:45.668: INFO: Unable to read jessie_udp@dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:45.671: INFO: Unable to read jessie_tcp@dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:45.676: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:45.680: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:45.716: INFO: Lookups using dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1862 wheezy_tcp@dns-test-service.dns-1862 wheezy_udp@dns-test-service.dns-1862.svc wheezy_tcp@dns-test-service.dns-1862.svc wheezy_udp@_http._tcp.dns-test-service.dns-1862.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1862.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1862 jessie_tcp@dns-test-service.dns-1862 jessie_udp@dns-test-service.dns-1862.svc jessie_tcp@dns-test-service.dns-1862.svc jessie_udp@_http._tcp.dns-test-service.dns-1862.svc jessie_tcp@_http._tcp.dns-test-service.dns-1862.svc]

Feb 23 10:20:50.557: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:50.561: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:50.566: INFO: Unable to read wheezy_udp@dns-test-service.dns-1862 from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:50.575: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1862 from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:50.579: INFO: Unable to read wheezy_udp@dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
... skipping 5 lines ...
Feb 23 10:20:50.634: INFO: Unable to read jessie_udp@dns-test-service.dns-1862 from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:50.639: INFO: Unable to read jessie_tcp@dns-test-service.dns-1862 from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:50.654: INFO: Unable to read jessie_udp@dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:50.659: INFO: Unable to read jessie_tcp@dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:50.663: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:50.675: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:50.712: INFO: Lookups using dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1862 wheezy_tcp@dns-test-service.dns-1862 wheezy_udp@dns-test-service.dns-1862.svc wheezy_tcp@dns-test-service.dns-1862.svc wheezy_udp@_http._tcp.dns-test-service.dns-1862.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1862.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1862 jessie_tcp@dns-test-service.dns-1862 jessie_udp@dns-test-service.dns-1862.svc jessie_tcp@dns-test-service.dns-1862.svc jessie_udp@_http._tcp.dns-test-service.dns-1862.svc jessie_tcp@_http._tcp.dns-test-service.dns-1862.svc]

Feb 23 10:20:55.556: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:55.559: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:55.563: INFO: Unable to read wheezy_udp@dns-test-service.dns-1862 from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:55.568: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1862 from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:55.572: INFO: Unable to read wheezy_udp@dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
... skipping 5 lines ...
Feb 23 10:20:55.630: INFO: Unable to read jessie_udp@dns-test-service.dns-1862 from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:55.634: INFO: Unable to read jessie_tcp@dns-test-service.dns-1862 from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:55.639: INFO: Unable to read jessie_udp@dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:55.644: INFO: Unable to read jessie_tcp@dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:55.647: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:55.651: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:20:55.675: INFO: Lookups using dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1862 wheezy_tcp@dns-test-service.dns-1862 wheezy_udp@dns-test-service.dns-1862.svc wheezy_tcp@dns-test-service.dns-1862.svc wheezy_udp@_http._tcp.dns-test-service.dns-1862.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1862.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1862 jessie_tcp@dns-test-service.dns-1862 jessie_udp@dns-test-service.dns-1862.svc jessie_tcp@dns-test-service.dns-1862.svc jessie_udp@_http._tcp.dns-test-service.dns-1862.svc jessie_tcp@_http._tcp.dns-test-service.dns-1862.svc]

Feb 23 10:21:00.555: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:21:00.558: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:21:00.562: INFO: Unable to read wheezy_udp@dns-test-service.dns-1862 from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:21:00.565: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1862 from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:21:00.568: INFO: Unable to read wheezy_udp@dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
... skipping 5 lines ...
Feb 23 10:21:00.620: INFO: Unable to read jessie_udp@dns-test-service.dns-1862 from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:21:00.624: INFO: Unable to read jessie_tcp@dns-test-service.dns-1862 from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:21:00.627: INFO: Unable to read jessie_udp@dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:21:00.631: INFO: Unable to read jessie_tcp@dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:21:00.636: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:21:00.639: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1862.svc from pod dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095: the server could not find the requested resource (get pods dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095)
Feb 23 10:21:00.666: INFO: Lookups using dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1862 wheezy_tcp@dns-test-service.dns-1862 wheezy_udp@dns-test-service.dns-1862.svc wheezy_tcp@dns-test-service.dns-1862.svc wheezy_udp@_http._tcp.dns-test-service.dns-1862.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1862.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1862 jessie_tcp@dns-test-service.dns-1862 jessie_udp@dns-test-service.dns-1862.svc jessie_tcp@dns-test-service.dns-1862.svc jessie_udp@_http._tcp.dns-test-service.dns-1862.svc jessie_tcp@_http._tcp.dns-test-service.dns-1862.svc]

Feb 23 10:21:05.689: INFO: DNS probes using dns-1862/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095 succeeded
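Editor's note: each "Lookups using ... failed for: [...]" line above summarizes which probe names (wheezy/jessie image, udp/tcp, at increasing DNS suffix depth) still fail; the test repolls until the list is empty. A small sketch of parsing one such summary line (illustrative only; the `line` value is an abbreviated sample, not a full log line):

```python
import re

# Abbreviated sample of a summary line from the log above.
line = ('Feb 23 10:20:55.675: INFO: Lookups using dns-1862/dns-test-pod '
        'failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service]')

def failed_lookups(log_line):
    """Extract the space-separated probe names from a 'failed for: [...]' line."""
    m = re.search(r'failed for: \[([^\]]*)\]', log_line)
    return m.group(1).split() if m else []

print(failed_lookups(line))
# → ['wheezy_udp@dns-test-service', 'wheezy_tcp@dns-test-service']
```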

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 6 lines ...
• [SLOW TEST:40.949 seconds]
[sig-network] DNS
test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":18,"failed":0}

SSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 7 lines ...
STEP: wait for the rc to be deleted
Feb 23 10:20:03.324: INFO: 0 pods remaining
Feb 23 10:20:03.324: INFO: 0 pods have nil DeletionTimestamp
Feb 23 10:20:03.324: INFO: 
STEP: Gathering metrics
W0223 10:20:04.129877   18713 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Feb 23 10:21:06.194: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:186
Feb 23 10:21:06.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3480" for this suite.


• [SLOW TEST:69.672 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":8,"skipped":105,"failed":0}

SSSSSSS
------------------------------
[BeforeEach] [k8s.io] Docker Containers
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
• [SLOW TEST:12.093 seconds]
[k8s.io] Docker Containers
test/e2e/framework/framework.go:635
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":86,"failed":0}

SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Pods
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 49 lines ...
• [SLOW TEST:17.083 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  test/e2e/framework/framework.go:640
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":7,"skipped":85,"failed":0}

SSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
  test/e2e/kubectl/kubectl.go:247
[It] should check if cluster-info dump succeeds
  test/e2e/kubectl/kubectl.go:1084
STEP: running cluster-info dump
Feb 23 10:21:06.324: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-out/k8-fastbuild-ST-5e46445d989a/bin/cmd/kubectl/kubectl_/kubectl --server=https://[::1]:33611 --kubeconfig=/root/.kube/kind-test-config --namespace=kubectl-6522 cluster-info dump'
Feb 23 10:21:08.017: INFO: stderr: ""
Feb 23 10:21:08.034: INFO: stdout: "{\n    \"kind\": \"NodeList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"12925\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kind-control-plane\",\n                \"uid\": \"930ce15c-8b58-41c9-b0b6-88b9947767f3\",\n                \"resourceVersion\": \"665\",\n                \"creationTimestamp\": \"2021-02-23T10:16:05Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"kind-control-plane\",\n                    \"kubernetes.io/os\": \"linux\",\n                    \"node-role.kubernetes.io/control-plane\": \"\",\n                    \"node-role.kubernetes.io/master\": \"\"\n                },\n                \"annotations\": {\n                    \"kubeadm.alpha.kubernetes.io/cri-socket\": \"unix:///run/containerd/containerd.sock\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"fd00:10:244::/64\",\n                \"podCIDRs\": [\n                    \"fd00:10:244::/64\"\n                ],\n                \"providerID\": \"kind://docker/kind/kind-control-plane\",\n                \"taints\": [\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ]\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"cpu\": \"8\",\n                    \"ephemeral-storage\": \"507944172Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": 
\"0\",\n                    \"memory\": \"53481648Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"cpu\": \"8\",\n                    \"ephemeral-storage\": \"507944172Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"53481648Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-02-23T10:17:14Z\",\n                        \"lastTransitionTime\": \"2021-02-23T10:16:01Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-02-23T10:17:14Z\",\n                        \"lastTransitionTime\": \"2021-02-23T10:16:01Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-02-23T10:17:14Z\",\n                        \"lastTransitionTime\": \"2021-02-23T10:16:01Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": 
\"2021-02-23T10:17:14Z\",\n                        \"lastTransitionTime\": \"2021-02-23T10:17:14Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"fc00:f853:ccd:e793::2\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"kind-control-plane\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"4f134a41d44e4941909a529a4233b5e0\",\n                    \"systemUUID\": \"87a79e46-4661-48cd-8244-bc3b4bf69f7c\",\n                    \"bootID\": \"f29f7a52-aa6a-47f1-9898-e8a3f77b41f7\",\n                    \"kernelVersion\": \"5.4.0-1029-gke\",\n                    \"osImage\": \"Ubuntu 20.10\",\n                    \"containerRuntimeVersion\": \"containerd://1.5.0-beta.0-69-gb3f240206\",\n                    \"kubeletVersion\": \"v1.21.0-alpha.3.456+c7e85d33636431\",\n                    \"kubeProxyVersion\": \"v1.21.0-alpha.3.456+c7e85d33636431\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/etcd:3.4.13-0\"\n                        ],\n                        \"sizeBytes\": 254659261\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-apiserver:v1.21.0-alpha.3.456_c7e85d33636431\"\n                        
],\n                        \"sizeBytes\": 171187989\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.3.456_c7e85d33636431\"\n                        ],\n                        \"sizeBytes\": 161756958\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy:v1.21.0-alpha.3.456_c7e85d33636431\"\n                        ],\n                        \"sizeBytes\": 137119862\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/kindest/kindnetd:v20210220-5b7e6d01\"\n                        ],\n                        \"sizeBytes\": 121784635\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-scheduler:v1.21.0-alpha.3.456_c7e85d33636431\"\n                        ],\n                        \"sizeBytes\": 67149588\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/build-image/debian-base:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 53876619\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/coredns/coredns:v1.8.0\"\n                        ],\n                        \"sizeBytes\": 42582495\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/rancher/local-path-provisioner:v0.0.14\"\n                        ],\n                        \"sizeBytes\": 41982521\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause:3.3\"\n                        ],\n                        \"sizeBytes\": 685708\n     
               }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kind-worker\",\n                \"uid\": \"4f1f3aeb-9ac1-43a1-a28a-6fd0d95eab5e\",\n                \"resourceVersion\": \"11394\",\n                \"creationTimestamp\": \"2021-02-23T10:16:35Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"kind-worker\",\n                    \"kubernetes.io/os\": \"linux\"\n                },\n                \"annotations\": {\n                    \"kubeadm.alpha.kubernetes.io/cri-socket\": \"unix:///run/containerd/containerd.sock\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"fd00:10:244:2::/64\",\n                \"podCIDRs\": [\n                    \"fd00:10:244:2::/64\"\n                ],\n                \"providerID\": \"kind://docker/kind/kind-worker\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"cpu\": \"8\",\n                    \"ephemeral-storage\": \"507944172Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"53481648Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"cpu\": \"8\",\n                    \"ephemeral-storage\": \"507944172Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"53481648Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": 
[\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-02-23T10:20:35Z\",\n                        \"lastTransitionTime\": \"2021-02-23T10:16:35Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-02-23T10:20:35Z\",\n                        \"lastTransitionTime\": \"2021-02-23T10:16:35Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-02-23T10:20:35Z\",\n                        \"lastTransitionTime\": \"2021-02-23T10:16:35Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": \"2021-02-23T10:20:35Z\",\n                        \"lastTransitionTime\": \"2021-02-23T10:17:05Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"fc00:f853:ccd:e793::3\"\n                    },\n                    {\n               
         \"type\": \"Hostname\",\n                        \"address\": \"kind-worker\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"cd9f2f7ba2c74a669efe996ea62aa055\",\n                    \"systemUUID\": \"41230341-8b7c-4f1a-a436-43dd728ceb80\",\n                    \"bootID\": \"f29f7a52-aa6a-47f1-9898-e8a3f77b41f7\",\n                    \"kernelVersion\": \"5.4.0-1029-gke\",\n                    \"osImage\": \"Ubuntu 20.10\",\n                    \"containerRuntimeVersion\": \"containerd://1.5.0-beta.0-69-gb3f240206\",\n                    \"kubeletVersion\": \"v1.21.0-alpha.3.456+c7e85d33636431\",\n                    \"kubeProxyVersion\": \"v1.21.0-alpha.3.456+c7e85d33636431\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/etcd:3.4.13-0\"\n                        ],\n                        \"sizeBytes\": 254659261\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-apiserver:v1.21.0-alpha.3.456_c7e85d33636431\"\n                        ],\n                        \"sizeBytes\": 171187989\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.3.456_c7e85d33636431\"\n                        ],\n                        \"sizeBytes\": 161756958\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy:v1.21.0-alpha.3.456_c7e85d33636431\"\n                     
   ],\n                        \"sizeBytes\": 137119862\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/kindest/kindnetd:v20210220-5b7e6d01\"\n                        ],\n                        \"sizeBytes\": 121784635\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89\",\n                            \"k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4\"\n                        ],\n                        \"sizeBytes\": 112029652\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-scheduler:v1.21.0-alpha.3.456_c7e85d33636431\"\n                        ],\n                        \"sizeBytes\": 67149588\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/build-image/debian-base:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 53876619\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.28\"\n                        ],\n                        \"sizeBytes\": 49210832\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/coredns/coredns:v1.8.0\"\n                        ],\n                        \"sizeBytes\": 42582495\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/rancher/local-path-provisioner:v0.0.14\"\n                        ],\n                
        \"sizeBytes\": 41982521\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:ded922beab64b03ba02abe05cb8848b0121942638e7421134d466a82f7761caf\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.39-alpine\"\n                        ],\n                        \"sizeBytes\": 41902151\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:5fa11b592a35a7dd992a7a6540eb7935e0b386558e57fee863ce9af3960a5ed1\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-alpine\"\n                        ],\n                        \"sizeBytes\": 40764825\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b\",\n                            \"k8s.gcr.io/e2e-test-images/nonroot:1.1\"\n                        ],\n                        \"sizeBytes\": 17748448\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nginx@sha256:dc81c9e400528c35e5b58d5208e7aece2b7452985009a44a341f42bebaabf07b\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-alpine\"\n                        ],\n                        \"sizeBytes\": 6979199\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29\"\n                        ],\n                        \"sizeBytes\": 732569\n                    },\n                    {\n                        
\"names\": [\n                            \"k8s.gcr.io/pause:3.3\"\n                        ],\n                        \"sizeBytes\": 685708\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810\",\n                            \"k8s.gcr.io/pause:3.4.1\"\n                        ],\n                        \"sizeBytes\": 301268\n                    }\n                ]\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kind-worker2\",\n                \"uid\": \"e2501d1b-d1c6-4cbc-ab58-61f383023e0a\",\n                \"resourceVersion\": \"7185\",\n                \"creationTimestamp\": \"2021-02-23T10:16:34Z\",\n                \"labels\": {\n                    \"beta.kubernetes.io/arch\": \"amd64\",\n                    \"beta.kubernetes.io/os\": \"linux\",\n                    \"kubernetes.io/arch\": \"amd64\",\n                    \"kubernetes.io/hostname\": \"kind-worker2\",\n                    \"kubernetes.io/os\": \"linux\"\n                },\n                \"annotations\": {\n                    \"kubeadm.alpha.kubernetes.io/cri-socket\": \"unix:///run/containerd/containerd.sock\",\n                    \"node.alpha.kubernetes.io/ttl\": \"0\",\n                    \"volumes.kubernetes.io/controller-managed-attach-detach\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"podCIDR\": \"fd00:10:244:1::/64\",\n                \"podCIDRs\": [\n                    \"fd00:10:244:1::/64\"\n                ],\n                \"providerID\": \"kind://docker/kind/kind-worker2\"\n            },\n            \"status\": {\n                \"capacity\": {\n                    \"cpu\": \"8\",\n                    \"ephemeral-storage\": \"507944172Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    
\"hugepages-2Mi\": \"0\",\n                    \"memory\": \"53481648Ki\",\n                    \"pods\": \"110\"\n                },\n                \"allocatable\": {\n                    \"cpu\": \"8\",\n                    \"ephemeral-storage\": \"507944172Ki\",\n                    \"hugepages-1Gi\": \"0\",\n                    \"hugepages-2Mi\": \"0\",\n                    \"memory\": \"53481648Ki\",\n                    \"pods\": \"110\"\n                },\n                \"conditions\": [\n                    {\n                        \"type\": \"MemoryPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-02-23T10:19:05Z\",\n                        \"lastTransitionTime\": \"2021-02-23T10:16:34Z\",\n                        \"reason\": \"KubeletHasSufficientMemory\",\n                        \"message\": \"kubelet has sufficient memory available\"\n                    },\n                    {\n                        \"type\": \"DiskPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-02-23T10:19:05Z\",\n                        \"lastTransitionTime\": \"2021-02-23T10:16:34Z\",\n                        \"reason\": \"KubeletHasNoDiskPressure\",\n                        \"message\": \"kubelet has no disk pressure\"\n                    },\n                    {\n                        \"type\": \"PIDPressure\",\n                        \"status\": \"False\",\n                        \"lastHeartbeatTime\": \"2021-02-23T10:19:05Z\",\n                        \"lastTransitionTime\": \"2021-02-23T10:16:34Z\",\n                        \"reason\": \"KubeletHasSufficientPID\",\n                        \"message\": \"kubelet has sufficient PID available\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastHeartbeatTime\": 
\"2021-02-23T10:19:05Z\",\n                        \"lastTransitionTime\": \"2021-02-23T10:17:04Z\",\n                        \"reason\": \"KubeletReady\",\n                        \"message\": \"kubelet is posting ready status\"\n                    }\n                ],\n                \"addresses\": [\n                    {\n                        \"type\": \"InternalIP\",\n                        \"address\": \"fc00:f853:ccd:e793::4\"\n                    },\n                    {\n                        \"type\": \"Hostname\",\n                        \"address\": \"kind-worker2\"\n                    }\n                ],\n                \"daemonEndpoints\": {\n                    \"kubeletEndpoint\": {\n                        \"Port\": 10250\n                    }\n                },\n                \"nodeInfo\": {\n                    \"machineID\": \"e8fef9e257334342bce90570e304d3a3\",\n                    \"systemUUID\": \"d4e29230-2897-4af8-9a1c-71bc692bd744\",\n                    \"bootID\": \"f29f7a52-aa6a-47f1-9898-e8a3f77b41f7\",\n                    \"kernelVersion\": \"5.4.0-1029-gke\",\n                    \"osImage\": \"Ubuntu 20.10\",\n                    \"containerRuntimeVersion\": \"containerd://1.5.0-beta.0-69-gb3f240206\",\n                    \"kubeletVersion\": \"v1.21.0-alpha.3.456+c7e85d33636431\",\n                    \"kubeProxyVersion\": \"v1.21.0-alpha.3.456+c7e85d33636431\",\n                    \"operatingSystem\": \"linux\",\n                    \"architecture\": \"amd64\"\n                },\n                \"images\": [\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/etcd:3.4.13-0\"\n                        ],\n                        \"sizeBytes\": 254659261\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-apiserver:v1.21.0-alpha.3.456_c7e85d33636431\"\n                        ],\n   
                     \"sizeBytes\": 171187989\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.3.456_c7e85d33636431\"\n                        ],\n                        \"sizeBytes\": 161756958\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-proxy:v1.21.0-alpha.3.456_c7e85d33636431\"\n                        ],\n                        \"sizeBytes\": 137119862\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/kindest/kindnetd:v20210220-5b7e6d01\"\n                        ],\n                        \"sizeBytes\": 121784635\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89\",\n                            \"k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4\"\n                        ],\n                        \"sizeBytes\": 112029652\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/kube-scheduler:v1.21.0-alpha.3.456_c7e85d33636431\"\n                        ],\n                        \"sizeBytes\": 67149588\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/build-image/debian-base:v2.1.0\"\n                        ],\n                        \"sizeBytes\": 53876619\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/agnhost@sha256:4f3eec1e602e1e0b0e5858c8f3d399d0ef536799ec1f7666e025f211001cb706\",\n                            \"k8s.gcr.io/e2e-test-images/agnhost:2.28\"\n        
                ],\n                        \"sizeBytes\": 49210832\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/coredns/coredns:v1.8.0\"\n                        ],\n                        \"sizeBytes\": 42582495\n                    },\n                    {\n                        \"names\": [\n                            \"docker.io/rancher/local-path-provisioner:v0.0.14\"\n                        ],\n                        \"sizeBytes\": 41982521\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:ded922beab64b03ba02abe05cb8848b0121942638e7421134d466a82f7761caf\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.39-alpine\"\n                        ],\n                        \"sizeBytes\": 41902151\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/httpd@sha256:5fa11b592a35a7dd992a7a6540eb7935e0b386558e57fee863ce9af3960a5ed1\",\n                            \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-alpine\"\n                        ],\n                        \"sizeBytes\": 40764825\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nginx@sha256:dc81c9e400528c35e5b58d5208e7aece2b7452985009a44a341f42bebaabf07b\",\n                            \"k8s.gcr.io/e2e-test-images/nginx:1.14-alpine\"\n                        ],\n                        \"sizeBytes\": 6979199\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac\",\n                            \"k8s.gcr.io/e2e-test-images/nonewprivs:1.3\"\n              
          ],\n                        \"sizeBytes\": 3263463\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/e2e-test-images/busybox@sha256:d67d1eda84b05ebfc47290ba490aa2474caa11d90be6a5ef70da1b3f2ca2a2e7\",\n                            \"k8s.gcr.io/e2e-test-images/busybox:1.29\"\n                        ],\n                        \"sizeBytes\": 732569\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause:3.3\"\n                        ],\n                        \"sizeBytes\": 685708\n                    },\n                    {\n                        \"names\": [\n                            \"k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810\",\n                            \"k8s.gcr.io/pause:3.4.1\"\n                        ],\n                        \"sizeBytes\": 301268\n                    }\n                ]\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"EventList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"12926\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-85d9df8444-56kd2.1666590a6deec473\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d6e2dd14-f67d-4b58-8ebb-43fec38aeea9\",\n                \"resourceVersion\": \"595\",\n                \"creationTimestamp\": \"2021-02-23T10:16:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-85d9df8444-56kd2\",\n                \"uid\": \"b6654480-cc3c-4c9b-8ccb-eef197ec69d1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"390\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": 
\"0/3 nodes are available: 3 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:16:57Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:04Z\",\n            \"count\": 3,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-85d9df8444-56kd2.1666590d3a99a959\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"be237282-2bea-4bcf-aa51-127cdf0d64f7\",\n                \"resourceVersion\": \"611\",\n                \"creationTimestamp\": \"2021-02-23T10:17:09Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-85d9df8444-56kd2\",\n                \"uid\": \"b6654480-cc3c-4c9b-8ccb-eef197ec69d1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"526\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-85d9df8444-56kd2 to kind-worker\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:09Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:09Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-85d9df8444-56kd2.1666590dd2dffa65\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"6dc4f4bf-b208-49d4-b7b8-14993b8911c1\",\n                \"resourceVersion\": \"621\",\n                \"creationTimestamp\": \"2021-02-23T10:17:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-85d9df8444-56kd2\",\n                \"uid\": \"b6654480-cc3c-4c9b-8ccb-eef197ec69d1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"609\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/coredns/coredns:v1.8.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:12Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-85d9df8444-56kd2.1666590dea7dbeed\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ecd346c6-d882-4673-8177-6de124a483c2\",\n                \"resourceVersion\": \"622\",\n                \"creationTimestamp\": \"2021-02-23T10:17:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-85d9df8444-56kd2\",\n                \"uid\": \"b6654480-cc3c-4c9b-8ccb-eef197ec69d1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"609\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Created\",\n            
\"message\": \"Created container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:12Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-85d9df8444-56kd2.1666590df0b74443\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"735a23bd-4cc6-4cff-a579-ab4caf70772b\",\n                \"resourceVersion\": \"623\",\n                \"creationTimestamp\": \"2021-02-23T10:17:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-85d9df8444-56kd2\",\n                \"uid\": \"b6654480-cc3c-4c9b-8ccb-eef197ec69d1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"609\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:12Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-85d9df8444-56kd2.1666590e2d912516\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"c50f306b-1522-45be-ab78-17ad72b6027c\",\n                \"resourceVersion\": \"633\",\n                \"creationTimestamp\": \"2021-02-23T10:17:13Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-85d9df8444-56kd2\",\n                \"uid\": \"b6654480-cc3c-4c9b-8ccb-eef197ec69d1\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"609\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Unhealthy\",\n            \"message\": \"Readiness probe failed: HTTP probe failed with statuscode: 503\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:13Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:13Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-85d9df8444-599l7.1666590a6ffeb162\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"f70394b1-c77b-404f-a8eb-b573c2468ec3\",\n                \"resourceVersion\": \"597\",\n                \"creationTimestamp\": \"2021-02-23T10:16:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-85d9df8444-599l7\",\n                \"uid\": \"703eed7d-8557-45c1-992b-ed2155732cfc\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"393\"\n            },\n            \"reason\": \"FailedScheduling\",\n            \"message\": \"0/3 nodes are available: 3 node(s) had taint 
{node.kubernetes.io/not-ready: }, that the pod didn't tolerate.\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:16:57Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:04Z\",\n            \"count\": 3,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-85d9df8444-599l7.1666590d3a1cbe23\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"1635958e-c2b6-481d-ad76-e79c8aba4b14\",\n                \"resourceVersion\": \"610\",\n                \"creationTimestamp\": \"2021-02-23T10:17:09Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-85d9df8444-599l7\",\n                \"uid\": \"703eed7d-8557-45c1-992b-ed2155732cfc\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"529\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/coredns-85d9df8444-599l7 to kind-worker2\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:09Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:09Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-85d9df8444-599l7.1666590dac14a797\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7b05594c-3b26-45f5-87e0-7dba94ae8927\",\n                
\"resourceVersion\": \"616\",\n                \"creationTimestamp\": \"2021-02-23T10:17:11Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-85d9df8444-599l7\",\n                \"uid\": \"703eed7d-8557-45c1-992b-ed2155732cfc\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"608\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/coredns/coredns:v1.8.0\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:11Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:11Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-85d9df8444-599l7.1666590dc6f64f71\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2b3bb40e-48e7-4169-9e69-711ed87e458a\",\n                \"resourceVersion\": \"619\",\n                \"creationTimestamp\": \"2021-02-23T10:17:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-85d9df8444-599l7\",\n                \"uid\": \"703eed7d-8557-45c1-992b-ed2155732cfc\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"608\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container coredns\",\n            
\"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:12Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-85d9df8444-599l7.1666590dcccf072e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3b2801c0-04e7-4e27-9c67-4c4b8e04836c\",\n                \"resourceVersion\": \"620\",\n                \"creationTimestamp\": \"2021-02-23T10:17:12Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-85d9df8444-599l7\",\n                \"uid\": \"703eed7d-8557-45c1-992b-ed2155732cfc\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"608\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container coredns\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:12Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:12Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-85d9df8444-599l7.16665926c513a32f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"08a5e7fa-5a14-496c-90da-060a807649e3\",\n                
\"resourceVersion\": \"6840\",\n                \"creationTimestamp\": \"2021-02-23T10:18:59Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-85d9df8444-599l7\",\n                \"uid\": \"703eed7d-8557-45c1-992b-ed2155732cfc\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"608\",\n                \"fieldPath\": \"spec.containers{coredns}\"\n            },\n            \"reason\": \"Unhealthy\",\n            \"message\": \"Readiness probe failed: Get \\\"http://[fd00:10:244:1::2]:8181/ready\\\": dial tcp [fd00:10:244:1::2]:8181: connect: connection refused\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:18:59Z\",\n            \"lastTimestamp\": \"2021-02-23T10:18:59Z\",\n            \"count\": 1,\n            \"type\": \"Warning\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-85d9df8444.166659028beb511f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"06fc06a7-b04f-4a26-aeea-e5549f395849\",\n                \"resourceVersion\": \"391\",\n                \"creationTimestamp\": \"2021-02-23T10:16:23Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-85d9df8444\",\n                \"uid\": \"10220068-aaf4-4ec5-9a70-c3c696b3cd71\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"381\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-85d9df8444-56kd2\",\n     
       \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:16:23Z\",\n            \"lastTimestamp\": \"2021-02-23T10:16:23Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-85d9df8444.166659028c4ce822\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a9f3ea20-c4eb-4dd6-b0ae-7dfb54fcf782\",\n                \"resourceVersion\": \"399\",\n                \"creationTimestamp\": \"2021-02-23T10:16:23Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"ReplicaSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns-85d9df8444\",\n                \"uid\": \"10220068-aaf4-4ec5-9a70-c3c696b3cd71\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"381\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: coredns-85d9df8444-599l7\",\n            \"source\": {\n                \"component\": \"replicaset-controller\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:16:23Z\",\n            \"lastTimestamp\": \"2021-02-23T10:16:23Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns.166659028a857b77\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"4239410c-93fb-4b5e-91fd-24fe5f1973c2\",\n                \"resourceVersion\": \"384\",\n                \"creationTimestamp\": \"2021-02-23T10:16:23Z\"\n            },\n            
\"involvedObject\": {\n                \"kind\": \"Deployment\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"coredns\",\n                \"uid\": \"ff684d26-a57b-4801-bb1e-7b8b9d7b817a\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"224\"\n            },\n            \"reason\": \"ScalingReplicaSet\",\n            \"message\": \"Scaled up replica set coredns-85d9df8444 to 2\",\n            \"source\": {\n                \"component\": \"deployment-controller\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:16:23Z\",\n            \"lastTimestamp\": \"2021-02-23T10:16:23Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-6nfp4.1666590a73b86999\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"32dcc195-1323-4dcf-b620-a3da55a5bc9b\",\n                \"resourceVersion\": \"549\",\n                \"creationTimestamp\": \"2021-02-23T10:16:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-6nfp4\",\n                \"uid\": \"aae54bf6-d4bf-4ac8-bd03-2bf5ea524b1e\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"424\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kindnet-6nfp4 to kind-control-plane\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:16:57Z\",\n            \"lastTimestamp\": \"2021-02-23T10:16:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": 
null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-6nfp4.1666590aa4a235c1\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a5626197-d0e9-4340-b388-1b58737edd63\",\n                \"resourceVersion\": \"554\",\n                \"creationTimestamp\": \"2021-02-23T10:16:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-6nfp4\",\n                \"uid\": \"aae54bf6-d4bf-4ac8-bd03-2bf5ea524b1e\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"537\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"kindest/kindnetd:v20210220-5b7e6d01\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:16:58Z\",\n            \"lastTimestamp\": \"2021-02-23T10:16:58Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-6nfp4.1666590b6b8c9275\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c1dfe578-6c5f-4cfb-beab-d64a8ba3463f\",\n                \"resourceVersion\": \"564\",\n                \"creationTimestamp\": \"2021-02-23T10:17:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-6nfp4\",\n                \"uid\": 
\"aae54bf6-d4bf-4ac8-bd03-2bf5ea524b1e\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"537\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kindnet-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:02Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-6nfp4.1666590b7de83f98\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c9ac40d7-40dd-4367-b283-eab1e1eab414\",\n                \"resourceVersion\": \"572\",\n                \"creationTimestamp\": \"2021-02-23T10:17:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-6nfp4\",\n                \"uid\": \"aae54bf6-d4bf-4ac8-bd03-2bf5ea524b1e\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"537\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kindnet-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:02Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": 
\"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-9cqk8.1666590a72697e55\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"9380b3cb-f0b9-41b6-8539-4dfa4f4b3748\",\n                \"resourceVersion\": \"546\",\n                \"creationTimestamp\": \"2021-02-23T10:16:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-9cqk8\",\n                \"uid\": \"59d6686c-0ff3-4d5e-9fe7-be8ea0501f9c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"486\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kindnet-9cqk8 to kind-worker\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:16:57Z\",\n            \"lastTimestamp\": \"2021-02-23T10:16:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-9cqk8.1666590ab6f06143\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5f254e2b-386d-40e9-b0f2-9a2222d44afa\",\n                \"resourceVersion\": \"556\",\n                \"creationTimestamp\": \"2021-02-23T10:16:59Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-9cqk8\",\n                \"uid\": \"59d6686c-0ff3-4d5e-9fe7-be8ea0501f9c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"534\",\n                \"fieldPath\": 
\"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"kindest/kindnetd:v20210220-5b7e6d01\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:16:59Z\",\n            \"lastTimestamp\": \"2021-02-23T10:16:59Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-9cqk8.1666590b6bcf90b7\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5d05075b-7b1a-4934-9650-9b12b4d6f947\",\n                \"resourceVersion\": \"567\",\n                \"creationTimestamp\": \"2021-02-23T10:17:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-9cqk8\",\n                \"uid\": \"59d6686c-0ff3-4d5e-9fe7-be8ea0501f9c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"534\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kindnet-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:02Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                
\"name\": \"kindnet-9cqk8.1666590b7d181ff8\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"704fe2c5-263b-4a9a-a64f-1a9ae8383fc1\",\n                \"resourceVersion\": \"568\",\n                \"creationTimestamp\": \"2021-02-23T10:17:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-9cqk8\",\n                \"uid\": \"59d6686c-0ff3-4d5e-9fe7-be8ea0501f9c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"534\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kindnet-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:02Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-p2pbg.1666590a73b1ee9f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"212b60cb-0392-48e8-8dca-785670b13b06\",\n                \"resourceVersion\": \"548\",\n                \"creationTimestamp\": \"2021-02-23T10:16:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-p2pbg\",\n                \"uid\": \"a82c4f96-81f7-4d4d-b92f-98656fd6c2fd\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"465\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully 
assigned kube-system/kindnet-p2pbg to kind-worker2\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:16:57Z\",\n            \"lastTimestamp\": \"2021-02-23T10:16:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-p2pbg.1666590aa51aea95\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c203057e-1049-402b-9604-056206f80257\",\n                \"resourceVersion\": \"555\",\n                \"creationTimestamp\": \"2021-02-23T10:16:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-p2pbg\",\n                \"uid\": \"a82c4f96-81f7-4d4d-b92f-98656fd6c2fd\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"538\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"kindest/kindnetd:v20210220-5b7e6d01\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:16:58Z\",\n            \"lastTimestamp\": \"2021-02-23T10:16:58Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-p2pbg.1666590b6bace7cc\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"d907a43b-b312-459c-99c8-95655b31ed07\",\n                \"resourceVersion\": \"563\",\n                \"creationTimestamp\": \"2021-02-23T10:17:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-p2pbg\",\n                \"uid\": \"a82c4f96-81f7-4d4d-b92f-98656fd6c2fd\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"538\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kindnet-cni\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:02Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-p2pbg.1666590b7d30be70\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ae3b5ae8-6acb-4444-8673-93d18e9d1404\",\n                \"resourceVersion\": \"569\",\n                \"creationTimestamp\": \"2021-02-23T10:17:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet-p2pbg\",\n                \"uid\": \"a82c4f96-81f7-4d4d-b92f-98656fd6c2fd\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"538\",\n                \"fieldPath\": \"spec.containers{kindnet-cni}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kindnet-cni\",\n            \"source\": {\n         
       \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:02Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet.16665902947e96bf\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"68158705-e6f3-48ae-a930-0020e859868c\",\n                \"resourceVersion\": \"432\",\n                \"creationTimestamp\": \"2021-02-23T10:16:24Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet\",\n                \"uid\": \"e5385763-f42c-4247-8a3d-88a0b31491f1\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"282\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kindnet-6nfp4\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:16:24Z\",\n            \"lastTimestamp\": \"2021-02-23T10:16:24Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet.1666590518aee6db\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b453cfda-fe67-42e8-9193-1f72a228eb09\",\n                \"resourceVersion\": \"470\",\n                \"creationTimestamp\": \"2021-02-23T10:16:34Z\"\n            },\n            \"involvedObject\": {\n                
\"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet\",\n                \"uid\": \"e5385763-f42c-4247-8a3d-88a0b31491f1\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"436\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kindnet-p2pbg\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:16:34Z\",\n            \"lastTimestamp\": \"2021-02-23T10:16:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet.1666590523ca023e\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2e1f9f98-6506-4ce7-b0a1-089221e5575a\",\n                \"resourceVersion\": \"490\",\n                \"creationTimestamp\": \"2021-02-23T10:16:35Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kindnet\",\n                \"uid\": \"e5385763-f42c-4247-8a3d-88a0b31491f1\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"472\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kindnet-9cqk8\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:16:35Z\",\n            \"lastTimestamp\": \"2021-02-23T10:16:35Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        
},\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager.166658ff3576aa33\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"bedf8bfb-b171-48ad-845e-b5a87210a045\",\n                \"resourceVersion\": \"234\",\n                \"creationTimestamp\": \"2021-02-23T10:16:09Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-controller-manager\",\n                \"uid\": \"883fb6b9-3448-46e1-a01e-0303dc3886f8\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"233\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"kind-control-plane_f01b5d86-a8d6-482f-8f91-87df94d619d4 became leader\",\n            \"source\": {\n                \"component\": \"kube-controller-manager\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:16:09Z\",\n            \"lastTimestamp\": \"2021-02-23T10:16:09Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-cfsxs.166659156287e2b2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"efb03c52-673e-4724-a38b-e900ce25b4c7\",\n                \"resourceVersion\": \"770\",\n                \"creationTimestamp\": \"2021-02-23T10:17:44Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-cfsxs\",\n                \"uid\": \"06f0a0b3-26e8-4de6-91ce-3c6845586bde\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"766\"\n            },\n            
\"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-proxy-cfsxs to kind-worker2\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:44Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-cfsxs.166659157e2dc64f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b72d8842-6060-4349-874c-0fae37c6e00e\",\n                \"resourceVersion\": \"773\",\n                \"creationTimestamp\": \"2021-02-23T10:17:45Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-cfsxs\",\n                \"uid\": \"06f0a0b3-26e8-4de6-91ce-3c6845586bde\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"767\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy:v1.21.0-alpha.3.456_c7e85d33636431\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:45Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:45Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": 
\"kube-proxy-cfsxs.1666591580cb5b80\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ef6655e5-ea9f-43d3-961d-8058687ca6dd\",\n                \"resourceVersion\": \"774\",\n                \"creationTimestamp\": \"2021-02-23T10:17:45Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-cfsxs\",\n                \"uid\": \"06f0a0b3-26e8-4de6-91ce-3c6845586bde\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"767\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:45Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:45Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-cfsxs.1666591589371d45\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"618bf103-8d1d-4c84-9f24-290ddb69d0ff\",\n                \"resourceVersion\": \"775\",\n                \"creationTimestamp\": \"2021-02-23T10:17:45Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-cfsxs\",\n                \"uid\": \"06f0a0b3-26e8-4de6-91ce-3c6845586bde\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"767\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            
\"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:45Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:45Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-cwd7s.1666590a6ee29ebb\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"7246efbd-013f-4c74-a608-a59e5fdf831e\",\n                \"resourceVersion\": \"536\",\n                \"creationTimestamp\": \"2021-02-23T10:16:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-cwd7s\",\n                \"uid\": \"b4778b9b-e195-4e8b-8065-1bb0bac26ea5\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"422\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-proxy-cwd7s to kind-control-plane\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:16:57Z\",\n            \"lastTimestamp\": \"2021-02-23T10:16:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-cwd7s.1666590a8cf8e7cc\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"85121f2e-666d-4546-b9cd-085e095c0496\",\n         
       \"resourceVersion\": \"551\",\n                \"creationTimestamp\": \"2021-02-23T10:16:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-cwd7s\",\n                \"uid\": \"b4778b9b-e195-4e8b-8065-1bb0bac26ea5\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"524\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy:v1.21.0-alpha.3.456_c7e85d33636431\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:16:58Z\",\n            \"lastTimestamp\": \"2021-02-23T10:16:58Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-cwd7s.1666590b6b6e1e4a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2b94124a-266b-425c-af9c-2a5bc19c5bec\",\n                \"resourceVersion\": \"562\",\n                \"creationTimestamp\": \"2021-02-23T10:17:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-cwd7s\",\n                \"uid\": \"b4778b9b-e195-4e8b-8065-1bb0bac26ea5\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"524\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container 
kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:02Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-cwd7s.1666590b7eadfedc\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a8f29117-e94e-4ba2-9b14-6bbbd112feaa\",\n                \"resourceVersion\": \"573\",\n                \"creationTimestamp\": \"2021-02-23T10:17:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-cwd7s\",\n                \"uid\": \"b4778b9b-e195-4e8b-8065-1bb0bac26ea5\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"524\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:02Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-cwd7s.1666590eb4d10ff8\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5a2f29db-0c0b-4ca6-b0bb-90c8df52cb11\",\n              
  \"resourceVersion\": \"650\",\n                \"creationTimestamp\": \"2021-02-23T10:17:16Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-cwd7s\",\n                \"uid\": \"b4778b9b-e195-4e8b-8065-1bb0bac26ea5\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"524\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Killing\",\n            \"message\": \"Stopping container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:16Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:16Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-gv4kg.16665910b119f9bc\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2790ab67-fa7d-4b35-bf57-562ecc948447\",\n                \"resourceVersion\": \"692\",\n                \"creationTimestamp\": \"2021-02-23T10:17:24Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-gv4kg\",\n                \"uid\": \"04827eed-042f-43d7-a3b9-0aec10925371\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"688\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-proxy-gv4kg to kind-control-plane\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n   
         \"firstTimestamp\": \"2021-02-23T10:17:24Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:24Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-gv4kg.16665910d2c07b5b\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"39d73ff3-92cd-4db1-9f2b-6449c35dd588\",\n                \"resourceVersion\": \"696\",\n                \"creationTimestamp\": \"2021-02-23T10:17:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-gv4kg\",\n                \"uid\": \"04827eed-042f-43d7-a3b9-0aec10925371\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"690\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy:v1.21.0-alpha.3.456_c7e85d33636431\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:25Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:25Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-gv4kg.16665910d79b413a\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d8b00b97-19b9-4123-aa50-218069cad212\",\n                \"resourceVersion\": \"697\",\n                \"creationTimestamp\": 
\"2021-02-23T10:17:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-gv4kg\",\n                \"uid\": \"04827eed-042f-43d7-a3b9-0aec10925371\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"690\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:25Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:25Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-gv4kg.16665910e51d6878\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"d519e542-ea2b-4f03-932b-728cdd1ed2f2\",\n                \"resourceVersion\": \"698\",\n                \"creationTimestamp\": \"2021-02-23T10:17:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-gv4kg\",\n                \"uid\": \"04827eed-042f-43d7-a3b9-0aec10925371\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"690\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-control-plane\"\n            },\n            
\"firstTimestamp\": \"2021-02-23T10:17:25Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:25Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-l7fjj.1666590a6eb95209\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"5bbdd405-90f4-445b-ad8b-5ad401ee8efc\",\n                \"resourceVersion\": \"527\",\n                \"creationTimestamp\": \"2021-02-23T10:16:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-l7fjj\",\n                \"uid\": \"87ab144e-cfb2-4779-b116-383193b24317\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"485\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-proxy-l7fjj to kind-worker\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:16:57Z\",\n            \"lastTimestamp\": \"2021-02-23T10:16:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-l7fjj.1666590a8ddfb6d0\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3bc26ba5-8886-4cf4-bf7c-1bb4f2f48ead\",\n                \"resourceVersion\": \"552\",\n                \"creationTimestamp\": \"2021-02-23T10:16:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": 
\"kube-system\",\n                \"name\": \"kube-proxy-l7fjj\",\n                \"uid\": \"87ab144e-cfb2-4779-b116-383193b24317\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"522\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy:v1.21.0-alpha.3.456_c7e85d33636431\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:16:58Z\",\n            \"lastTimestamp\": \"2021-02-23T10:16:58Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-l7fjj.1666590b6bb4858f\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ca77b929-bd7e-4e34-96fd-026bf23ebd6d\",\n                \"resourceVersion\": \"566\",\n                \"creationTimestamp\": \"2021-02-23T10:17:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-l7fjj\",\n                \"uid\": \"87ab144e-cfb2-4779-b116-383193b24317\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"522\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:02Z\",\n            \"lastTimestamp\": 
\"2021-02-23T10:17:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-l7fjj.1666590b7d388a45\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"34770868-5d1e-40d8-8b5b-fcba0d848f67\",\n                \"resourceVersion\": \"570\",\n                \"creationTimestamp\": \"2021-02-23T10:17:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-l7fjj\",\n                \"uid\": \"87ab144e-cfb2-4779-b116-383193b24317\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"522\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:02Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-l7fjj.16665910f7c147d9\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"26b1252a-5e97-46c9-9a83-dcf71a13dbdb\",\n                \"resourceVersion\": \"703\",\n                \"creationTimestamp\": \"2021-02-23T10:17:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                
\"name\": \"kube-proxy-l7fjj\",\n                \"uid\": \"87ab144e-cfb2-4779-b116-383193b24317\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"522\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Killing\",\n            \"message\": \"Stopping container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:25Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:25Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-mg2kq.166659131cbf61d0\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"bd7f00d2-2c4b-4c3c-adbd-a38f31a5f7f4\",\n                \"resourceVersion\": \"732\",\n                \"creationTimestamp\": \"2021-02-23T10:17:35Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-mg2kq\",\n                \"uid\": \"751c0eee-e925-48c4-98c6-4ab70cfa1d9c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"728\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-proxy-mg2kq to kind-worker\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:35Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:35Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n           
 \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-mg2kq.1666591337589ca2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8442f6e0-d8b9-4b25-b17b-fafd093bda3c\",\n                \"resourceVersion\": \"734\",\n                \"creationTimestamp\": \"2021-02-23T10:17:35Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-mg2kq\",\n                \"uid\": \"751c0eee-e925-48c4-98c6-4ab70cfa1d9c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"729\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy:v1.21.0-alpha.3.456_c7e85d33636431\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:35Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:35Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-mg2kq.166659133a2feea4\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c7bc6fc9-1104-4e5f-837e-681f1fb9c3d7\",\n                \"resourceVersion\": \"735\",\n                \"creationTimestamp\": \"2021-02-23T10:17:35Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-mg2kq\",\n                \"uid\": 
\"751c0eee-e925-48c4-98c6-4ab70cfa1d9c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"729\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:35Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:35Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-mg2kq.16665913417d7165\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"3e4ecf5d-ffcb-40bc-bf7a-e498dadad4a1\",\n                \"resourceVersion\": \"736\",\n                \"creationTimestamp\": \"2021-02-23T10:17:35Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-mg2kq\",\n                \"uid\": \"751c0eee-e925-48c4-98c6-4ab70cfa1d9c\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"729\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:35Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:35Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n          
  \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-wc7sz.1666590a6ee0708c\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"86252d93-e4dc-4fa3-b703-2e1fb725b7ea\",\n                \"resourceVersion\": \"530\",\n                \"creationTimestamp\": \"2021-02-23T10:16:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-wc7sz\",\n                \"uid\": \"9a10786c-af01-46f1-827e-f603af9d8004\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"464\"\n            },\n            \"reason\": \"Scheduled\",\n            \"message\": \"Successfully assigned kube-system/kube-proxy-wc7sz to kind-worker2\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:16:57Z\",\n            \"lastTimestamp\": \"2021-02-23T10:16:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-wc7sz.1666590a8e1ea893\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ba49f46d-1982-4472-897c-c8857d770cdc\",\n                \"resourceVersion\": \"553\",\n                \"creationTimestamp\": \"2021-02-23T10:16:58Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-wc7sz\",\n                \"uid\": \"9a10786c-af01-46f1-827e-f603af9d8004\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"523\",\n                \"fieldPath\": 
\"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Pulled\",\n            \"message\": \"Container image \\\"k8s.gcr.io/kube-proxy:v1.21.0-alpha.3.456_c7e85d33636431\\\" already present on machine\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:16:58Z\",\n            \"lastTimestamp\": \"2021-02-23T10:16:58Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-wc7sz.1666590b6bcf85df\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"2b9085ff-7963-497d-9535-0bd00a15dd1b\",\n                \"resourceVersion\": \"565\",\n                \"creationTimestamp\": \"2021-02-23T10:17:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-wc7sz\",\n                \"uid\": \"9a10786c-af01-46f1-827e-f603af9d8004\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"523\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Created\",\n            \"message\": \"Created container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:02Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": 
{\n                \"name\": \"kube-proxy-wc7sz.1666590b7d348212\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b5ec0281-9b3a-482f-b5f6-b053290c09a7\",\n                \"resourceVersion\": \"571\",\n                \"creationTimestamp\": \"2021-02-23T10:17:02Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-wc7sz\",\n                \"uid\": \"9a10786c-af01-46f1-827e-f603af9d8004\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"523\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n            },\n            \"reason\": \"Started\",\n            \"message\": \"Started container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:02Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:02Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-wc7sz.166659135ff267af\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a4c3a28d-62b8-4cbe-9ae6-fe44a0f56400\",\n                \"resourceVersion\": \"745\",\n                \"creationTimestamp\": \"2021-02-23T10:17:36Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Pod\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy-wc7sz\",\n                \"uid\": \"9a10786c-af01-46f1-827e-f603af9d8004\",\n                \"apiVersion\": \"v1\",\n                \"resourceVersion\": \"523\",\n                \"fieldPath\": \"spec.containers{kube-proxy}\"\n  
          },\n            \"reason\": \"Killing\",\n            \"message\": \"Stopping container kube-proxy\",\n            \"source\": {\n                \"component\": \"kubelet\",\n                \"host\": \"kind-worker2\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:36Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:36Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy.1666590294559d34\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"66c7fc3c-fcea-4583-aa0f-0f4c885ba69e\",\n                \"resourceVersion\": \"426\",\n                \"creationTimestamp\": \"2021-02-23T10:16:24Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy\",\n                \"uid\": \"cccc0f15-cff7-4fb6-8452-09ec7c055a10\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"229\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-proxy-cwd7s\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:16:24Z\",\n            \"lastTimestamp\": \"2021-02-23T10:16:24Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy.16665905186fe37d\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"8f170947-4ddb-45aa-892d-76dbc0596492\",\n                
\"resourceVersion\": \"467\",\n                \"creationTimestamp\": \"2021-02-23T10:16:34Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy\",\n                \"uid\": \"cccc0f15-cff7-4fb6-8452-09ec7c055a10\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"427\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-proxy-wc7sz\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:16:34Z\",\n            \"lastTimestamp\": \"2021-02-23T10:16:34Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy.1666590523c9e720\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ce43a77d-ce32-469d-9e63-2159ab99e17b\",\n                \"resourceVersion\": \"488\",\n                \"creationTimestamp\": \"2021-02-23T10:16:35Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy\",\n                \"uid\": \"cccc0f15-cff7-4fb6-8452-09ec7c055a10\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"469\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-proxy-l7fjj\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:16:35Z\",\n            \"lastTimestamp\": \"2021-02-23T10:16:35Z\",\n           
 \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy.1666590eb4c9bd04\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"bcaea06f-8476-44f7-ab41-f021e6c74484\",\n                \"resourceVersion\": \"648\",\n                \"creationTimestamp\": \"2021-02-23T10:17:16Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy\",\n                \"uid\": \"cccc0f15-cff7-4fb6-8452-09ec7c055a10\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"645\"\n            },\n            \"reason\": \"SuccessfulDelete\",\n            \"message\": \"Deleted pod: kube-proxy-cwd7s\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:16Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:16Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy.16665910b0a24d83\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"6437ed85-ed91-4d73-a41a-0c3f1aee3aad\",\n                \"resourceVersion\": \"689\",\n                \"creationTimestamp\": \"2021-02-23T10:17:24Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy\",\n                \"uid\": \"cccc0f15-cff7-4fb6-8452-09ec7c055a10\",\n                
\"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"655\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-proxy-gv4kg\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:24Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:24Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy.16665910f7c0e9ec\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"844aee25-c79c-462f-a0ee-47175d1007f5\",\n                \"resourceVersion\": \"704\",\n                \"creationTimestamp\": \"2021-02-23T10:17:25Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy\",\n                \"uid\": \"cccc0f15-cff7-4fb6-8452-09ec7c055a10\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"691\"\n            },\n            \"reason\": \"SuccessfulDelete\",\n            \"message\": \"Deleted pod: kube-proxy-l7fjj\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:25Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:25Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy.166659131c54b7b2\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"41ba2d45-9abb-44f7-b23e-8995bbe2d7b5\",\n                \"resourceVersion\": \"730\",\n                \"creationTimestamp\": \"2021-02-23T10:17:35Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy\",\n                \"uid\": \"cccc0f15-cff7-4fb6-8452-09ec7c055a10\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"709\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-proxy-mg2kq\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:35Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:35Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy.166659135fe4d697\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"03a9e792-78fc-4091-af8f-178fdd9b1de8\",\n                \"resourceVersion\": \"743\",\n                \"creationTimestamp\": \"2021-02-23T10:17:36Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy\",\n                \"uid\": \"cccc0f15-cff7-4fb6-8452-09ec7c055a10\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"731\"\n            },\n            \"reason\": \"SuccessfulDelete\",\n            \"message\": \"Deleted pod: kube-proxy-wc7sz\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:36Z\",\n          
  \"lastTimestamp\": \"2021-02-23T10:17:36Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy.166659156232c074\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"c3c225ab-caec-47fc-aea7-0f6d972da5a3\",\n                \"resourceVersion\": \"768\",\n                \"creationTimestamp\": \"2021-02-23T10:17:44Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"DaemonSet\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-proxy\",\n                \"uid\": \"cccc0f15-cff7-4fb6-8452-09ec7c055a10\",\n                \"apiVersion\": \"apps/v1\",\n                \"resourceVersion\": \"748\"\n            },\n            \"reason\": \"SuccessfulCreate\",\n            \"message\": \"Created pod: kube-proxy-cfsxs\",\n            \"source\": {\n                \"component\": \"daemonset-controller\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:17:44Z\",\n            \"lastTimestamp\": \"2021-02-23T10:17:44Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler.1666590a6db34d02\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"877d4229-e67f-4cd8-8718-2161891b23da\",\n                \"resourceVersion\": \"521\",\n                \"creationTimestamp\": \"2021-02-23T10:16:57Z\"\n            },\n            \"involvedObject\": {\n                \"kind\": \"Lease\",\n                \"namespace\": \"kube-system\",\n                \"name\": \"kube-scheduler\",\n                \"uid\": 
\"386f810e-c591-4328-9f7e-f492eb70bae3\",\n                \"apiVersion\": \"coordination.k8s.io/v1\",\n                \"resourceVersion\": \"520\"\n            },\n            \"reason\": \"LeaderElection\",\n            \"message\": \"kind-control-plane_58d8c9d3-a6f9-4cac-9992-66b74af9c1f1 became leader\",\n            \"source\": {\n                \"component\": \"default-scheduler\"\n            },\n            \"firstTimestamp\": \"2021-02-23T10:16:57Z\",\n            \"lastTimestamp\": \"2021-02-23T10:16:57Z\",\n            \"count\": 1,\n            \"type\": \"Normal\",\n            \"eventTime\": null,\n            \"reportingComponent\": \"\",\n            \"reportingInstance\": \"\"\n        }\n    ]\n}\n{\n    \"kind\": \"ReplicationControllerList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"12926\"\n    },\n    \"items\": []\n}\n{\n    \"kind\": \"ServiceList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"12927\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kube-dns\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b1331bcb-affd-4d2f-b2b7-db4af4e83ab4\",\n                \"resourceVersion\": \"226\",\n                \"creationTimestamp\": \"2021-02-23T10:16:09Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"kubernetes.io/cluster-service\": \"true\",\n                    \"kubernetes.io/name\": \"KubeDNS\"\n                },\n                \"annotations\": {\n                    \"prometheus.io/port\": \"9153\",\n                    \"prometheus.io/scrape\": \"true\"\n                }\n            },\n            \"spec\": {\n                \"ports\": [\n                    {\n                        \"name\": \"dns\",\n                        \"protocol\": \"UDP\",\n                        \"port\": 53,\n                        
\"targetPort\": 53\n                    },\n                    {\n                        \"name\": \"dns-tcp\",\n                        \"protocol\": \"TCP\",\n                        \"port\": 53,\n                        \"targetPort\": 53\n                    },\n                    {\n                        \"name\": \"metrics\",\n                        \"protocol\": \"TCP\",\n                        \"port\": 9153,\n                        \"targetPort\": 9153\n                    }\n                ],\n                \"selector\": {\n                    \"k8s-app\": \"kube-dns\"\n                },\n                \"clusterIP\": \"fd00:10:96::a\",\n                \"clusterIPs\": [\n                    \"fd00:10:96::a\"\n                ],\n                \"type\": \"ClusterIP\",\n                \"sessionAffinity\": \"None\",\n                \"ipFamilies\": [\n                    \"IPv6\"\n                ],\n                \"ipFamilyPolicy\": \"SingleStack\"\n            },\n            \"status\": {\n                \"loadBalancer\": {}\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"DaemonSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"12927\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"kindnet\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"e5385763-f42c-4247-8a3d-88a0b31491f1\",\n                \"resourceVersion\": \"586\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-02-23T10:16:10Z\",\n                \"labels\": {\n                    \"app\": \"kindnet\",\n                    \"k8s-app\": \"kindnet\",\n                    \"tier\": \"node\"\n                },\n                \"annotations\": {\n                    \"deprecated.daemonset.template.generation\": \"1\"\n                }\n            },\n            \"spec\": {\n                \"selector\": {\n    
                \"matchLabels\": {\n                        \"app\": \"kindnet\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"app\": \"kindnet\",\n                            \"k8s-app\": \"kindnet\",\n                            \"tier\": \"node\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"cni-cfg\",\n                                \"hostPath\": {\n                                    \"path\": \"/etc/cni/net.d\",\n                                    \"type\": \"\"\n                                }\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"hostPath\": {\n                                    \"path\": \"/run/xtables.lock\",\n                                    \"type\": \"FileOrCreate\"\n                                }\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"hostPath\": {\n                                    \"path\": \"/lib/modules\",\n                                    \"type\": \"\"\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"kindnet-cni\",\n                                \"image\": \"kindest/kindnetd:v20210220-5b7e6d01\",\n                                \"env\": [\n                                    {\n                                        \"name\": \"HOST_IP\",\n                                        
\"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"status.hostIP\"\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"POD_IP\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                                                \"fieldPath\": \"status.podIP\"\n                                            }\n                                        }\n                                    },\n                                    {\n                                        \"name\": \"POD_SUBNET\",\n                                        \"value\": \"fd00:10:244::/56\"\n                                    },\n                                    {\n                                        \"name\": \"CONTROL_PLANE_ENDPOINT\",\n                                        \"value\": \"kind-control-plane:6443\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"50Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"50Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                  
      \"name\": \"cni-cfg\",\n                                        \"mountPath\": \"/etc/cni/net.d\"\n                                    },\n                                    {\n                                        \"name\": \"xtables-lock\",\n                                        \"mountPath\": \"/run/xtables.lock\"\n                                    },\n                                    {\n                                        \"name\": \"lib-modules\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/lib/modules\"\n                                    }\n                                ],\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_RAW\",\n                                            \"NET_ADMIN\"\n                                        ]\n                                    },\n                                    \"privileged\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"serviceAccountName\": \"kindnet\",\n                        \"serviceAccount\": \"kindnet\",\n                        \"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                         
       \"operator\": \"Exists\",\n                                \"effect\": \"NoSchedule\"\n                            }\n                        ]\n                    }\n                },\n                \"updateStrategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": 1,\n                        \"maxSurge\": 0\n                    }\n                },\n                \"revisionHistoryLimit\": 10\n            },\n            \"status\": {\n                \"currentNumberScheduled\": 3,\n                \"numberMisscheduled\": 0,\n                \"desiredNumberScheduled\": 3,\n                \"numberReady\": 3,\n                \"observedGeneration\": 1,\n                \"updatedNumberScheduled\": 3,\n                \"numberAvailable\": 3\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"cccc0f15-cff7-4fb6-8452-09ec7c055a10\",\n                \"resourceVersion\": \"780\",\n                \"generation\": 2,\n                \"creationTimestamp\": \"2021-02-23T10:16:09Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-proxy\"\n                },\n                \"annotations\": {\n                    \"deprecated.daemonset.template.generation\": \"2\"\n                }\n            },\n            \"spec\": {\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-proxy\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-proxy\"\n                        }\n                    },\n                    \"spec\": {\n                        
\"volumes\": [\n                            {\n                                \"name\": \"kube-proxy\",\n                                \"configMap\": {\n                                    \"name\": \"kube-proxy\",\n                                    \"defaultMode\": 420\n                                }\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"hostPath\": {\n                                    \"path\": \"/run/xtables.lock\",\n                                    \"type\": \"FileOrCreate\"\n                                }\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"hostPath\": {\n                                    \"path\": \"/lib/modules\",\n                                    \"type\": \"\"\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"kube-proxy\",\n                                \"image\": \"k8s.gcr.io/kube-proxy:v1.21.0-alpha.3.456_c7e85d33636431\",\n                                \"command\": [\n                                    \"/usr/local/bin/kube-proxy\",\n                                    \"--config=/var/lib/kube-proxy/config.conf\",\n                                    \"--hostname-override=$(NODE_NAME)\",\n                                    \"--v=4\"\n                                ],\n                                \"env\": [\n                                    {\n                                        \"name\": \"NODE_NAME\",\n                                        \"valueFrom\": {\n                                            \"fieldRef\": {\n                                                \"apiVersion\": \"v1\",\n                          
                      \"fieldPath\": \"spec.nodeName\"\n                                            }\n                                        }\n                                    }\n                                ],\n                                \"resources\": {},\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"kube-proxy\",\n                                        \"mountPath\": \"/var/lib/kube-proxy\"\n                                    },\n                                    {\n                                        \"name\": \"xtables-lock\",\n                                        \"mountPath\": \"/run/xtables.lock\"\n                                    },\n                                    {\n                                        \"name\": \"lib-modules\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/lib/modules\"\n                                    }\n                                ],\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"privileged\": true\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"ClusterFirst\",\n                        \"nodeSelector\": {\n                            \"kubernetes.io/os\": \"linux\"\n                        },\n                        \"serviceAccountName\": \"kube-proxy\",\n                        \"serviceAccount\": \"kube-proxy\",\n                        
\"hostNetwork\": true,\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            },\n                            {\n                                \"operator\": \"Exists\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-node-critical\"\n                    }\n                },\n                \"updateStrategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": 1,\n                        \"maxSurge\": 0\n                    }\n                },\n                \"revisionHistoryLimit\": 10\n            },\n            \"status\": {\n                \"currentNumberScheduled\": 3,\n                \"numberMisscheduled\": 0,\n                \"desiredNumberScheduled\": 3,\n                \"numberReady\": 3,\n                \"observedGeneration\": 2,\n                \"updatedNumberScheduled\": 3,\n                \"numberAvailable\": 3\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"DeploymentList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"12927\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"ff684d26-a57b-4801-bb1e-7b8b9d7b817a\",\n                \"resourceVersion\": \"682\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-02-23T10:16:09Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\"\n                },\n                \"annotations\": {\n                    
\"deployment.kubernetes.io/revision\": \"1\"\n                }\n            },\n            \"spec\": {\n                \"replicas\": 2,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-dns\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-dns\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"configMap\": {\n                                    \"name\": \"coredns\",\n                                    \"items\": [\n                                        {\n                                            \"key\": \"Corefile\",\n                                            \"path\": \"Corefile\"\n                                        }\n                                    ],\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"coredns\",\n                                \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.0\",\n                                \"args\": [\n                                    \"-conf\",\n                                    \"/etc/coredns/Corefile\"\n                                ],\n                                \"ports\": [\n                                    {\n                                        \"name\": \"dns\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"UDP\"\n       
                             },\n                                    {\n                                        \"name\": \"dns-tcp\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"TCP\"\n                                    },\n                                    {\n                                        \"name\": \"metrics\",\n                                        \"containerPort\": 9153,\n                                        \"protocol\": \"TCP\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"170Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"70Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"config-volume\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/etc/coredns\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/health\",\n                                        \"port\": 8080,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 10,\n                                    
\"successThreshold\": 1,\n                                    \"failureThreshold\": 5\n                                },\n                                \"readinessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/ready\",\n                                        \"port\": 8181,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"timeoutSeconds\": 1,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_BIND_SERVICE\"\n                                        ],\n                                        \"drop\": [\n                                            \"all\"\n                                        ]\n                                    },\n                                    \"readOnlyRootFilesystem\": true,\n                                    \"allowPrivilegeEscalation\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"kubernetes.io/os\": \"linux\"\n                        },\n                        
\"serviceAccountName\": \"coredns\",\n                        \"serviceAccount\": \"coredns\",\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            },\n                            {\n                                \"key\": \"node-role.kubernetes.io/master\",\n                                \"effect\": \"NoSchedule\"\n                            },\n                            {\n                                \"key\": \"node-role.kubernetes.io/control-plane\",\n                                \"effect\": \"NoSchedule\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                },\n                \"strategy\": {\n                    \"type\": \"RollingUpdate\",\n                    \"rollingUpdate\": {\n                        \"maxUnavailable\": 1,\n                        \"maxSurge\": \"25%\"\n                    }\n                },\n                \"revisionHistoryLimit\": 10,\n                \"progressDeadlineSeconds\": 600\n            },\n            \"status\": {\n                \"observedGeneration\": 1,\n                \"replicas\": 2,\n                \"updatedReplicas\": 2,\n                \"readyReplicas\": 2,\n                \"availableReplicas\": 2,\n                \"conditions\": [\n                    {\n                        \"type\": \"Available\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-02-23T10:17:19Z\",\n                        \"lastTransitionTime\": \"2021-02-23T10:17:19Z\",\n                        \"reason\": \"MinimumReplicasAvailable\",\n                        
\"message\": \"Deployment has minimum availability.\"\n                    },\n                    {\n                        \"type\": \"Progressing\",\n                        \"status\": \"True\",\n                        \"lastUpdateTime\": \"2021-02-23T10:17:23Z\",\n                        \"lastTransitionTime\": \"2021-02-23T10:16:23Z\",\n                        \"reason\": \"NewReplicaSetAvailable\",\n                        \"message\": \"ReplicaSet \\\"coredns-85d9df8444\\\" has successfully progressed.\"\n                    }\n                ]\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"ReplicaSetList\",\n    \"apiVersion\": \"apps/v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"12927\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-85d9df8444\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"10220068-aaf4-4ec5-9a70-c3c696b3cd71\",\n                \"resourceVersion\": \"680\",\n                \"generation\": 1,\n                \"creationTimestamp\": \"2021-02-23T10:16:23Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"85d9df8444\"\n                },\n                \"annotations\": {\n                    \"deployment.kubernetes.io/desired-replicas\": \"2\",\n                    \"deployment.kubernetes.io/max-replicas\": \"3\",\n                    \"deployment.kubernetes.io/revision\": \"1\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"Deployment\",\n                        \"name\": \"coredns\",\n                        \"uid\": \"ff684d26-a57b-4801-bb1e-7b8b9d7b817a\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n        
    \"spec\": {\n                \"replicas\": 2,\n                \"selector\": {\n                    \"matchLabels\": {\n                        \"k8s-app\": \"kube-dns\",\n                        \"pod-template-hash\": \"85d9df8444\"\n                    }\n                },\n                \"template\": {\n                    \"metadata\": {\n                        \"creationTimestamp\": null,\n                        \"labels\": {\n                            \"k8s-app\": \"kube-dns\",\n                            \"pod-template-hash\": \"85d9df8444\"\n                        }\n                    },\n                    \"spec\": {\n                        \"volumes\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"configMap\": {\n                                    \"name\": \"coredns\",\n                                    \"items\": [\n                                        {\n                                            \"key\": \"Corefile\",\n                                            \"path\": \"Corefile\"\n                                        }\n                                    ],\n                                    \"defaultMode\": 420\n                                }\n                            }\n                        ],\n                        \"containers\": [\n                            {\n                                \"name\": \"coredns\",\n                                \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.0\",\n                                \"args\": [\n                                    \"-conf\",\n                                    \"/etc/coredns/Corefile\"\n                                ],\n                                \"ports\": [\n                                    {\n                                        \"name\": \"dns\",\n                                        \"containerPort\": 53,\n                           
             \"protocol\": \"UDP\"\n                                    },\n                                    {\n                                        \"name\": \"dns-tcp\",\n                                        \"containerPort\": 53,\n                                        \"protocol\": \"TCP\"\n                                    },\n                                    {\n                                        \"name\": \"metrics\",\n                                        \"containerPort\": 9153,\n                                        \"protocol\": \"TCP\"\n                                    }\n                                ],\n                                \"resources\": {\n                                    \"limits\": {\n                                        \"memory\": \"170Mi\"\n                                    },\n                                    \"requests\": {\n                                        \"cpu\": \"100m\",\n                                        \"memory\": \"70Mi\"\n                                    }\n                                },\n                                \"volumeMounts\": [\n                                    {\n                                        \"name\": \"config-volume\",\n                                        \"readOnly\": true,\n                                        \"mountPath\": \"/etc/coredns\"\n                                    }\n                                ],\n                                \"livenessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/health\",\n                                        \"port\": 8080,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"initialDelaySeconds\": 60,\n                                    \"timeoutSeconds\": 5,\n                                    \"periodSeconds\": 10,\n       
                             \"successThreshold\": 1,\n                                    \"failureThreshold\": 5\n                                },\n                                \"readinessProbe\": {\n                                    \"httpGet\": {\n                                        \"path\": \"/ready\",\n                                        \"port\": 8181,\n                                        \"scheme\": \"HTTP\"\n                                    },\n                                    \"timeoutSeconds\": 1,\n                                    \"periodSeconds\": 10,\n                                    \"successThreshold\": 1,\n                                    \"failureThreshold\": 3\n                                },\n                                \"terminationMessagePath\": \"/dev/termination-log\",\n                                \"terminationMessagePolicy\": \"File\",\n                                \"imagePullPolicy\": \"IfNotPresent\",\n                                \"securityContext\": {\n                                    \"capabilities\": {\n                                        \"add\": [\n                                            \"NET_BIND_SERVICE\"\n                                        ],\n                                        \"drop\": [\n                                            \"all\"\n                                        ]\n                                    },\n                                    \"readOnlyRootFilesystem\": true,\n                                    \"allowPrivilegeEscalation\": false\n                                }\n                            }\n                        ],\n                        \"restartPolicy\": \"Always\",\n                        \"terminationGracePeriodSeconds\": 30,\n                        \"dnsPolicy\": \"Default\",\n                        \"nodeSelector\": {\n                            \"kubernetes.io/os\": \"linux\"\n                        
},\n                        \"serviceAccountName\": \"coredns\",\n                        \"serviceAccount\": \"coredns\",\n                        \"securityContext\": {},\n                        \"schedulerName\": \"default-scheduler\",\n                        \"tolerations\": [\n                            {\n                                \"key\": \"CriticalAddonsOnly\",\n                                \"operator\": \"Exists\"\n                            },\n                            {\n                                \"key\": \"node-role.kubernetes.io/master\",\n                                \"effect\": \"NoSchedule\"\n                            },\n                            {\n                                \"key\": \"node-role.kubernetes.io/control-plane\",\n                                \"effect\": \"NoSchedule\"\n                            }\n                        ],\n                        \"priorityClassName\": \"system-cluster-critical\"\n                    }\n                }\n            },\n            \"status\": {\n                \"replicas\": 2,\n                \"fullyLabeledReplicas\": 2,\n                \"readyReplicas\": 2,\n                \"availableReplicas\": 2,\n                \"observedGeneration\": 1\n            }\n        }\n    ]\n}\n{\n    \"kind\": \"PodList\",\n    \"apiVersion\": \"v1\",\n    \"metadata\": {\n        \"resourceVersion\": \"12927\"\n    },\n    \"items\": [\n        {\n            \"metadata\": {\n                \"name\": \"coredns-85d9df8444-56kd2\",\n                \"generateName\": \"coredns-85d9df8444-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"b6654480-cc3c-4c9b-8ccb-eef197ec69d1\",\n                \"resourceVersion\": \"678\",\n                \"creationTimestamp\": \"2021-02-23T10:16:23Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"85d9df8444\"\n                
},\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": \"coredns-85d9df8444\",\n                        \"uid\": \"10220068-aaf4-4ec5-9a70-c3c696b3cd71\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"config-volume\",\n                        \"configMap\": {\n                            \"name\": \"coredns\",\n                            \"items\": [\n                                {\n                                    \"key\": \"Corefile\",\n                                    \"path\": \"Corefile\"\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-7cp7m\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n               
                         ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.0\",\n                        \"args\": [\n                            \"-conf\",\n                            \"/etc/coredns/Corefile\"\n                        ],\n                        \"ports\": [\n                            {\n                                \"name\": \"dns\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"UDP\"\n                            },\n                            {\n                                \"name\": \"dns-tcp\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"TCP\"\n                            },\n                            {\n                                \"name\": \"metrics\",\n                                \"containerPort\": 9153,\n                                \"protocol\": 
\"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"170Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"70Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/coredns\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-7cp7m\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/health\",\n                                \"port\": 8080,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 60,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 5\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/ready\",\n                                \"port\": 8181,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"timeoutSeconds\": 1,\n                            
\"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_BIND_SERVICE\"\n                                ],\n                                \"drop\": [\n                                    \"all\"\n                                ]\n                            },\n                            \"readOnlyRootFilesystem\": true,\n                            \"allowPrivilegeEscalation\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"coredns\",\n                \"serviceAccount\": \"coredns\",\n                \"nodeName\": \"kind-worker\",\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node-role.kubernetes.io/control-plane\",\n                        \"effect\": \"NoSchedule\"\n                   
 },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:09Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:23Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:23Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:09Z\"\n                    }\n              
  ],\n                \"hostIP\": \"fc00:f853:ccd:e793::3\",\n                \"podIP\": \"fd00:10:244:2::2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"fd00:10:244:2::2\"\n                    }\n                ],\n                \"startTime\": \"2021-02-23T10:17:09Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-02-23T10:17:12Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.0\",\n                        \"imageID\": \"sha256:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899\",\n                        \"containerID\": \"containerd://d984b53458be8cce09d3cdf4915f487f57f096361f0af6b4e3b07551af4e810d\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"coredns-85d9df8444-599l7\",\n                \"generateName\": \"coredns-85d9df8444-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"703eed7d-8557-45c1-992b-ed2155732cfc\",\n                \"resourceVersion\": \"666\",\n                \"creationTimestamp\": \"2021-02-23T10:16:23Z\",\n                \"labels\": {\n                    \"k8s-app\": \"kube-dns\",\n                    \"pod-template-hash\": \"85d9df8444\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"ReplicaSet\",\n                        \"name\": 
\"coredns-85d9df8444\",\n                        \"uid\": \"10220068-aaf4-4ec5-9a70-c3c696b3cd71\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"config-volume\",\n                        \"configMap\": {\n                            \"name\": \"coredns\",\n                            \"items\": [\n                                {\n                                    \"key\": \"Corefile\",\n                                    \"path\": \"Corefile\"\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-pdcwr\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n            
                            \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.0\",\n                        \"args\": [\n                            \"-conf\",\n                            \"/etc/coredns/Corefile\"\n                        ],\n                        \"ports\": [\n                            {\n                                \"name\": \"dns\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"UDP\"\n                            },\n                            {\n                                \"name\": \"dns-tcp\",\n                                \"containerPort\": 53,\n                                \"protocol\": \"TCP\"\n                            },\n                            {\n                                \"name\": \"metrics\",\n                                \"containerPort\": 9153,\n                                \"protocol\": \"TCP\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"memory\": \"170Mi\"\n   
                         },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"70Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"config-volume\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/coredns\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-pdcwr\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/health\",\n                                \"port\": 8080,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 60,\n                            \"timeoutSeconds\": 5,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 5\n                        },\n                        \"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/ready\",\n                                \"port\": 8181,\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"timeoutSeconds\": 1,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"terminationMessagePath\": 
\"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_BIND_SERVICE\"\n                                ],\n                                \"drop\": [\n                                    \"all\"\n                                ]\n                            },\n                            \"readOnlyRootFilesystem\": true,\n                            \"allowPrivilegeEscalation\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"Default\",\n                \"nodeSelector\": {\n                    \"kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"coredns\",\n                \"serviceAccount\": \"coredns\",\n                \"nodeName\": \"kind-worker2\",\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node-role.kubernetes.io/master\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node-role.kubernetes.io/control-plane\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n           
             \"tolerationSeconds\": 300\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\",\n                        \"tolerationSeconds\": 300\n                    }\n                ],\n                \"priorityClassName\": \"system-cluster-critical\",\n                \"priority\": 2000000000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:09Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:19Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:19Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:09Z\"\n                    }\n                ],\n                \"hostIP\": \"fc00:f853:ccd:e793::4\",\n                \"podIP\": \"fd00:10:244:1::2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": 
\"fd00:10:244:1::2\"\n                    }\n                ],\n                \"startTime\": \"2021-02-23T10:17:09Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"coredns\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-02-23T10:17:12Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/coredns/coredns:v1.8.0\",\n                        \"imageID\": \"sha256:296a6d5035e2d6919249e02709a488d680ddca91357602bd65e605eac967b899\",\n                        \"containerID\": \"containerd://67ab42ab6b01fd506554f5fa2e49fe4f9cc403e1524e30d1398c66fecbb4864f\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"etcd-kind-control-plane\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"700c86f8-4395-46a0-890e-e884517afc43\",\n                \"resourceVersion\": \"664\",\n                \"creationTimestamp\": \"2021-02-23T10:16:14Z\",\n                \"labels\": {\n                    \"component\": \"etcd\",\n                    \"tier\": \"control-plane\"\n                },\n                \"annotations\": {\n                    \"kubeadm.kubernetes.io/etcd.advertise-client-urls\": \"https://[fc00:f853:ccd:e793::2]:2379\",\n                    \"kubernetes.io/config.hash\": \"d347fbe837ab07128a9b6f6b9dcd3987\",\n                    \"kubernetes.io/config.mirror\": \"d347fbe837ab07128a9b6f6b9dcd3987\",\n                    \"kubernetes.io/config.seen\": \"2021-02-23T10:16:14.262283147Z\",\n                    
\"kubernetes.io/config.source\": \"file\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"kind-control-plane\",\n                        \"uid\": \"930ce15c-8b58-41c9-b0b6-88b9947767f3\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"etcd-certs\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki/etcd\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"etcd-data\",\n                        \"hostPath\": {\n                            \"path\": \"/var/lib/etcd\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"etcd\",\n                        \"image\": \"k8s.gcr.io/etcd:3.4.13-0\",\n                        \"command\": [\n                            \"etcd\",\n                            \"--advertise-client-urls=https://[fc00:f853:ccd:e793::2]:2379\",\n                            \"--cert-file=/etc/kubernetes/pki/etcd/server.crt\",\n                            \"--client-cert-auth=true\",\n                            \"--data-dir=/var/lib/etcd\",\n                            \"--initial-advertise-peer-urls=https://[fc00:f853:ccd:e793::2]:2380\",\n                            \"--initial-cluster=kind-control-plane=https://[fc00:f853:ccd:e793::2]:2380\",\n                            \"--key-file=/etc/kubernetes/pki/etcd/server.key\",\n                            
\"--listen-client-urls=https://[::1]:2379,https://[fc00:f853:ccd:e793::2]:2379\",\n                            \"--listen-metrics-urls=http://[::1]:2381\",\n                            \"--listen-peer-urls=https://[fc00:f853:ccd:e793::2]:2380\",\n                            \"--name=kind-control-plane\",\n                            \"--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt\",\n                            \"--peer-client-cert-auth=true\",\n                            \"--peer-key-file=/etc/kubernetes/pki/etcd/peer.key\",\n                            \"--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt\",\n                            \"--snapshot-count=10000\",\n                            \"--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"ephemeral-storage\": \"100Mi\",\n                                \"memory\": \"100Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"etcd-data\",\n                                \"mountPath\": \"/var/lib/etcd\"\n                            },\n                            {\n                                \"name\": \"etcd-certs\",\n                                \"mountPath\": \"/etc/kubernetes/pki/etcd\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/health\",\n                                \"port\": 2381,\n                                \"host\": \"::1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 10,\n                            
\"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 8\n                        },\n                        \"startupProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/health\",\n                                \"port\": 2381,\n                                \"host\": \"::1\",\n                                \"scheme\": \"HTTP\"\n                            },\n                            \"initialDelaySeconds\": 10,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 24\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"kind-control-plane\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n              
      {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:16:14Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:18Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:18Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:16:14Z\"\n                    }\n                ],\n                \"hostIP\": \"fc00:f853:ccd:e793::2\",\n                \"podIP\": \"fc00:f853:ccd:e793::2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"fc00:f853:ccd:e793::2\"\n                    }\n                ],\n                \"startTime\": \"2021-02-23T10:16:14Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"etcd\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-02-23T10:15:43Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/etcd:3.4.13-0\",\n                        \"imageID\": 
\"sha256:0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934\",\n                        \"containerID\": \"containerd://e8ff6c50a53f1a38ea35bc39ad377676a8fee72e54f9ba839902921d617cfdfc\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-6nfp4\",\n                \"generateName\": \"kindnet-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"aae54bf6-d4bf-4ac8-bd03-2bf5ea524b1e\",\n                \"resourceVersion\": \"577\",\n                \"creationTimestamp\": \"2021-02-23T10:16:24Z\",\n                \"labels\": {\n                    \"app\": \"kindnet\",\n                    \"controller-revision-hash\": \"79b58b6598\",\n                    \"k8s-app\": \"kindnet\",\n                    \"pod-template-generation\": \"1\",\n                    \"tier\": \"node\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kindnet\",\n                        \"uid\": \"e5385763-f42c-4247-8a3d-88a0b31491f1\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"cni-cfg\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                 
           \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-7tdb6\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n 
                                       ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kindnet-cni\",\n                        \"image\": \"kindest/kindnetd:v20210220-5b7e6d01\",\n                        \"env\": [\n                            {\n                                \"name\": \"HOST_IP\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"status.hostIP\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_IP\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"status.podIP\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_SUBNET\",\n                                \"value\": \"fd00:10:244::/56\"\n                            },\n                            {\n                                \"name\": \"CONTROL_PLANE_ENDPOINT\",\n                                \"value\": \"kind-control-plane:6443\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"50Mi\"\n                            
},\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"50Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"cni-cfg\",\n                                \"mountPath\": \"/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-7tdb6\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_RAW\",\n                                    \"NET_ADMIN\"\n                                ]\n                            },\n                            \"privileged\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                
\"serviceAccountName\": \"kindnet\",\n                \"serviceAccount\": \"kindnet\",\n                \"nodeName\": \"kind-control-plane\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"kind-control-plane\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                   
 },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priority\": 0,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:16:57Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:02Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:02Z\"\n                    },\n         
           {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:16:57Z\"\n                    }\n                ],\n                \"hostIP\": \"fc00:f853:ccd:e793::2\",\n                \"podIP\": \"fc00:f853:ccd:e793::2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"fc00:f853:ccd:e793::2\"\n                    }\n                ],\n                \"startTime\": \"2021-02-23T10:16:57Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kindnet-cni\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-02-23T10:17:02Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"docker.io/kindest/kindnetd:v20210220-5b7e6d01\",\n                        \"imageID\": \"sha256:2b60427ffa5fe60e2d271d9e93c8f2994ac24b7425c855b7d2c0370ad8a4ad6c\",\n                        \"containerID\": \"containerd://b2ca63a010cddf677f9a1fb4c0a91c8acc57f094dfaa731e1373744b55dbc975\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Guaranteed\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-9cqk8\",\n                \"generateName\": \"kindnet-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"59d6686c-0ff3-4d5e-9fe7-be8ea0501f9c\",\n                \"resourceVersion\": \"585\",\n                \"creationTimestamp\": \"2021-02-23T10:16:35Z\",\n                \"labels\": {\n                    \"app\": \"kindnet\",\n      
              \"controller-revision-hash\": \"79b58b6598\",\n                    \"k8s-app\": \"kindnet\",\n                    \"pod-template-generation\": \"1\",\n                    \"tier\": \"node\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kindnet\",\n                        \"uid\": \"e5385763-f42c-4247-8a3d-88a0b31491f1\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"cni-cfg\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-qb4d2\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n        
                        },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kindnet-cni\",\n                        \"image\": \"kindest/kindnetd:v20210220-5b7e6d01\",\n                        \"env\": [\n                            {\n                                \"name\": \"HOST_IP\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": 
\"status.hostIP\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_IP\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"status.podIP\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_SUBNET\",\n                                \"value\": \"fd00:10:244::/56\"\n                            },\n                            {\n                                \"name\": \"CONTROL_PLANE_ENDPOINT\",\n                                \"value\": \"kind-control-plane:6443\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"50Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"50Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"cni-cfg\",\n                                \"mountPath\": \"/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                    
            \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-qb4d2\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_RAW\",\n                                    \"NET_ADMIN\"\n                                ]\n                            },\n                            \"privileged\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"kindnet\",\n                \"serviceAccount\": \"kindnet\",\n                \"nodeName\": \"kind-worker\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                              
  \"kind-worker\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": 
\"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priority\": 0,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:16:57Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:03Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:03Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:16:57Z\"\n                    }\n                ],\n                \"hostIP\": \"fc00:f853:ccd:e793::3\",\n                \"podIP\": \"fc00:f853:ccd:e793::3\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"fc00:f853:ccd:e793::3\"\n                    }\n                ],\n                \"startTime\": \"2021-02-23T10:16:57Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kindnet-cni\",\n                        \"state\": {\n                            \"running\": {\n      
                          \"startedAt\": \"2021-02-23T10:17:02Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"docker.io/kindest/kindnetd:v20210220-5b7e6d01\",\n                        \"imageID\": \"sha256:2b60427ffa5fe60e2d271d9e93c8f2994ac24b7425c855b7d2c0370ad8a4ad6c\",\n                        \"containerID\": \"containerd://c2498c71dd143d13b97332a56ac864f55d37433eea93b3bf6f6cc1782dc14f40\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Guaranteed\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kindnet-p2pbg\",\n                \"generateName\": \"kindnet-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a82c4f96-81f7-4d4d-b92f-98656fd6c2fd\",\n                \"resourceVersion\": \"581\",\n                \"creationTimestamp\": \"2021-02-23T10:16:34Z\",\n                \"labels\": {\n                    \"app\": \"kindnet\",\n                    \"controller-revision-hash\": \"79b58b6598\",\n                    \"k8s-app\": \"kindnet\",\n                    \"pod-template-generation\": \"1\",\n                    \"tier\": \"node\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kindnet\",\n                        \"uid\": \"e5385763-f42c-4247-8a3d-88a0b31491f1\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"cni-cfg\",\n        
                \"hostPath\": {\n                            \"path\": \"/etc/cni/net.d\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-6j2kp\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                       
                         \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kindnet-cni\",\n                        \"image\": \"kindest/kindnetd:v20210220-5b7e6d01\",\n                        \"env\": [\n                            {\n                                \"name\": \"HOST_IP\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"status.hostIP\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_IP\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"status.podIP\"\n                                    }\n                                }\n                            },\n                            {\n                                \"name\": \"POD_SUBNET\",\n                                \"value\": \"fd00:10:244::/56\"\n                            },\n                            {\n                                
\"name\": \"CONTROL_PLANE_ENDPOINT\",\n                                \"value\": \"kind-control-plane:6443\"\n                            }\n                        ],\n                        \"resources\": {\n                            \"limits\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"50Mi\"\n                            },\n                            \"requests\": {\n                                \"cpu\": \"100m\",\n                                \"memory\": \"50Mi\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"cni-cfg\",\n                                \"mountPath\": \"/etc/cni/net.d\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-6j2kp\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"capabilities\": {\n                                \"add\": [\n                                    \"NET_RAW\",\n                     
               \"NET_ADMIN\"\n                                ]\n                            },\n                            \"privileged\": false\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"serviceAccountName\": \"kindnet\",\n                \"serviceAccount\": \"kindnet\",\n                \"nodeName\": \"kind-worker2\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"kind-worker2\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": 
\"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priority\": 0,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:16:57Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n 
                       \"lastTransitionTime\": \"2021-02-23T10:17:02Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:02Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:16:57Z\"\n                    }\n                ],\n                \"hostIP\": \"fc00:f853:ccd:e793::4\",\n                \"podIP\": \"fc00:f853:ccd:e793::4\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"fc00:f853:ccd:e793::4\"\n                    }\n                ],\n                \"startTime\": \"2021-02-23T10:16:57Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kindnet-cni\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-02-23T10:17:02Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"docker.io/kindest/kindnetd:v20210220-5b7e6d01\",\n                        \"imageID\": \"sha256:2b60427ffa5fe60e2d271d9e93c8f2994ac24b7425c855b7d2c0370ad8a4ad6c\",\n                        \"containerID\": \"containerd://bdd504fd0fb10fae4096182d2e8bc9dd9c6d50382bccac4c83b76a98493ed0a0\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Guaranteed\"\n            }\n        },\n        {\n            \"metadata\": {\n                
\"name\": \"kube-apiserver-kind-control-plane\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"324e3c1a-8d25-42b6-9390-05243c7b6e5f\",\n                \"resourceVersion\": \"371\",\n                \"creationTimestamp\": \"2021-02-23T10:16:15Z\",\n                \"labels\": {\n                    \"component\": \"kube-apiserver\",\n                    \"tier\": \"control-plane\"\n                },\n                \"annotations\": {\n                    \"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint\": \"[fc00:f853:ccd:e793::2]:6443\",\n                    \"kubernetes.io/config.hash\": \"a1b1db42a3439ade62a0b98a8000af92\",\n                    \"kubernetes.io/config.mirror\": \"a1b1db42a3439ade62a0b98a8000af92\",\n                    \"kubernetes.io/config.seen\": \"2021-02-23T10:16:14.262286685Z\",\n                    \"kubernetes.io/config.source\": \"file\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"kind-control-plane\",\n                        \"uid\": \"930ce15c-8b58-41c9-b0b6-88b9947767f3\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"ca-certs\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/ssl/certs\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"etc-ca-certificates\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/ca-certificates\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n             
       {\n                        \"name\": \"k8s-certs\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"usr-local-share-ca-certificates\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/local/share/ca-certificates\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"usr-share-ca-certificates\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-apiserver\",\n                        \"image\": \"k8s.gcr.io/kube-apiserver:v1.21.0-alpha.3.456_c7e85d33636431\",\n                        \"command\": [\n                            \"kube-apiserver\",\n                            \"--advertise-address=fc00:f853:ccd:e793::2\",\n                            \"--allow-privileged=true\",\n                            \"--authorization-mode=Node,RBAC\",\n                            \"--client-ca-file=/etc/kubernetes/pki/ca.crt\",\n                            \"--enable-admission-plugins=NodeRestriction\",\n                            \"--enable-bootstrap-token-auth=true\",\n                            \"--etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt\",\n                            \"--etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt\",\n                            \"--etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key\",\n                            
\"--etcd-servers=https://[::1]:2379\",\n                            \"--insecure-port=0\",\n                            \"--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt\",\n                            \"--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key\",\n                            \"--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname\",\n                            \"--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt\",\n                            \"--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key\",\n                            \"--requestheader-allowed-names=front-proxy-client\",\n                            \"--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt\",\n                            \"--requestheader-extra-headers-prefix=X-Remote-Extra-\",\n                            \"--requestheader-group-headers=X-Remote-Group\",\n                            \"--requestheader-username-headers=X-Remote-User\",\n                            \"--runtime-config=\",\n                            \"--secure-port=6443\",\n                            \"--service-account-issuer=https://kubernetes.default.svc.cluster.local\",\n                            \"--service-account-key-file=/etc/kubernetes/pki/sa.pub\",\n                            \"--service-account-signing-key-file=/etc/kubernetes/pki/sa.key\",\n                            \"--service-cluster-ip-range=fd00:10:96::/112\",\n                            \"--tls-cert-file=/etc/kubernetes/pki/apiserver.crt\",\n                            \"--tls-private-key-file=/etc/kubernetes/pki/apiserver.key\",\n                            \"--v=4\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"250m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n 
                           {\n                                \"name\": \"ca-certs\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"etc-ca-certificates\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ca-certificates\"\n                            },\n                            {\n                                \"name\": \"k8s-certs\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/pki\"\n                            },\n                            {\n                                \"name\": \"usr-local-share-ca-certificates\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/local/share/ca-certificates\"\n                            },\n                            {\n                                \"name\": \"usr-share-ca-certificates\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/share/ca-certificates\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/livez\",\n                                \"port\": 6443,\n                                \"host\": \"fc00:f853:ccd:e793::2\",\n                                \"scheme\": \"HTTPS\"\n                            },\n                            \"initialDelaySeconds\": 10,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 8\n                        },\n                        
\"readinessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/readyz\",\n                                \"port\": 6443,\n                                \"host\": \"fc00:f853:ccd:e793::2\",\n                                \"scheme\": \"HTTPS\"\n                            },\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 1,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 3\n                        },\n                        \"startupProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/livez\",\n                                \"port\": 6443,\n                                \"host\": \"fc00:f853:ccd:e793::2\",\n                                \"scheme\": \"HTTPS\"\n                            },\n                            \"initialDelaySeconds\": 10,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 24\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"kind-control-plane\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    
}\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:16:14Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:16:23Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:16:23Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:16:14Z\"\n                    }\n                ],\n                \"hostIP\": \"fc00:f853:ccd:e793::2\",\n                \"podIP\": \"fc00:f853:ccd:e793::2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"fc00:f853:ccd:e793::2\"\n                    }\n                ],\n                \"startTime\": \"2021-02-23T10:16:14Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-apiserver\",\n                        \"state\": {\n                            \"running\": {\n             
                   \"startedAt\": \"2021-02-23T10:16:01Z\"\n                            }\n                        },\n                        \"lastState\": {\n                            \"terminated\": {\n                                \"exitCode\": 1,\n                                \"reason\": \"Error\",\n                                \"startedAt\": \"2021-02-23T10:15:12Z\",\n                                \"finishedAt\": \"2021-02-23T10:15:33Z\",\n                                \"containerID\": \"containerd://665b334d4068083ecdb19215a9475db5aaafe5466917292857c73423cc1803b4\"\n                            }\n                        },\n                        \"ready\": true,\n                        \"restartCount\": 1,\n                        \"image\": \"k8s.gcr.io/kube-apiserver:v1.21.0-alpha.3.456_c7e85d33636431\",\n                        \"imageID\": \"sha256:3ed397d32bfb71d5aab8a01cfe745d6a4a58823ebe179e4d4f03fbeb6dbe2bcd\",\n                        \"containerID\": \"containerd://e65aaeef1e9d9967cccfd02d68e3118ae60f0c636161e90ae516fde780ebb8fb\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-controller-manager-kind-control-plane\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"a78e5bc1-4b5c-430c-a7f5-85b819832d9a\",\n                \"resourceVersion\": \"777\",\n                \"creationTimestamp\": \"2021-02-23T10:16:14Z\",\n                \"labels\": {\n                    \"component\": \"kube-controller-manager\",\n                    \"tier\": \"control-plane\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"3b9fca6fca5bf7a70d775193b6953c00\",\n                    \"kubernetes.io/config.mirror\": \"3b9fca6fca5bf7a70d775193b6953c00\",\n                    
\"kubernetes.io/config.seen\": \"2021-02-23T10:16:14.262269733Z\",\n                    \"kubernetes.io/config.source\": \"file\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"kind-control-plane\",\n                        \"uid\": \"930ce15c-8b58-41c9-b0b6-88b9947767f3\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"ca-certs\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/ssl/certs\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"etc-ca-certificates\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/ca-certificates\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"flexvolume-dir\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"k8s-certs\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/pki\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": 
\"/etc/kubernetes/controller-manager.conf\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"usr-local-share-ca-certificates\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/local/share/ca-certificates\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"usr-share-ca-certificates\",\n                        \"hostPath\": {\n                            \"path\": \"/usr/share/ca-certificates\",\n                            \"type\": \"DirectoryOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-controller-manager\",\n                        \"image\": \"k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.3.456_c7e85d33636431\",\n                        \"command\": [\n                            \"kube-controller-manager\",\n                            \"--allocate-node-cidrs=true\",\n                            \"--authentication-kubeconfig=/etc/kubernetes/controller-manager.conf\",\n                            \"--authorization-kubeconfig=/etc/kubernetes/controller-manager.conf\",\n                            \"--bind-address=::\",\n                            \"--client-ca-file=/etc/kubernetes/pki/ca.crt\",\n                            \"--cluster-cidr=fd00:10:244::/56\",\n                            \"--cluster-name=kind\",\n                            \"--cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt\",\n                            \"--cluster-signing-key-file=/etc/kubernetes/pki/ca.key\",\n                            \"--controllers=*,bootstrapsigner,tokencleaner\",\n                            \"--enable-hostpath-provisioner=true\",\n                    
        \"--kubeconfig=/etc/kubernetes/controller-manager.conf\",\n                            \"--leader-elect=true\",\n                            \"--port=0\",\n                            \"--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt\",\n                            \"--root-ca-file=/etc/kubernetes/pki/ca.crt\",\n                            \"--service-account-private-key-file=/etc/kubernetes/pki/sa.key\",\n                            \"--service-cluster-ip-range=fd00:10:96::/112\",\n                            \"--use-service-account-credentials=true\",\n                            \"--v=4\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"200m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"ca-certs\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ssl/certs\"\n                            },\n                            {\n                                \"name\": \"etc-ca-certificates\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/ca-certificates\"\n                            },\n                            {\n                                \"name\": \"flexvolume-dir\",\n                                \"mountPath\": \"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\"\n                            },\n                            {\n                                \"name\": \"k8s-certs\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/pki\"\n                            },\n                            {\n                                \"name\": \"kubeconfig\",\n                                
\"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/controller-manager.conf\"\n                            },\n                            {\n                                \"name\": \"usr-local-share-ca-certificates\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/local/share/ca-certificates\"\n                            },\n                            {\n                                \"name\": \"usr-share-ca-certificates\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/usr/share/ca-certificates\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 10257,\n                                \"scheme\": \"HTTPS\"\n                            },\n                            \"initialDelaySeconds\": 10,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 8\n                        },\n                        \"startupProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 10257,\n                                \"scheme\": \"HTTPS\"\n                            },\n                            \"initialDelaySeconds\": 10,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 24\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        
\"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"kind-control-plane\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:16:14Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:45Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:45Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        
\"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:16:14Z\"\n                    }\n                ],\n                \"hostIP\": \"fc00:f853:ccd:e793::2\",\n                \"podIP\": \"fc00:f853:ccd:e793::2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"fc00:f853:ccd:e793::2\"\n                    }\n                ],\n                \"startTime\": \"2021-02-23T10:16:14Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-controller-manager\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-02-23T10:15:22Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-controller-manager:v1.21.0-alpha.3.456_c7e85d33636431\",\n                        \"imageID\": \"sha256:dcb5e4aeaa4ad4af4fc84365e21dab3381d30d14eb2ee3471b6903777d8dac47\",\n                        \"containerID\": \"containerd://150e50a50f3f3e43d507902c80e60790fe9006e74ca50b6c194bdcbbdc2ce352\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-cfsxs\",\n                \"generateName\": \"kube-proxy-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"06f0a0b3-26e8-4de6-91ce-3c6845586bde\",\n                \"resourceVersion\": \"779\",\n                \"creationTimestamp\": \"2021-02-23T10:17:44Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"7dd75d4c87\",\n                    \"k8s-app\": \"kube-proxy\",\n                    
\"pod-template-generation\": \"2\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kube-proxy\",\n                        \"uid\": \"cccc0f15-cff7-4fb6-8452-09ec7c055a10\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"configMap\": {\n                            \"name\": \"kube-proxy\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-2rn5z\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        
\"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy:v1.21.0-alpha.3.456_c7e85d33636431\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\",\n                            \"--config=/var/lib/kube-proxy/config.conf\",\n                            \"--hostname-override=$(NODE_NAME)\",\n                            \"--v=4\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"NODE_NAME\",\n                                \"valueFrom\": {\n    
                                \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"spec.nodeName\"\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {},\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kube-proxy\",\n                                \"mountPath\": \"/var/lib/kube-proxy\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-2rn5z\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeSelector\": {\n                    \"kubernetes.io/os\": \"linux\"\n                },\n          
      \"serviceAccountName\": \"kube-proxy\",\n                \"serviceAccount\": \"kube-proxy\",\n                \"nodeName\": \"kind-worker2\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"kind-worker2\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                    
    \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:44Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:45Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        
\"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:45Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:44Z\"\n                    }\n                ],\n                \"hostIP\": \"fc00:f853:ccd:e793::4\",\n                \"podIP\": \"fc00:f853:ccd:e793::4\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"fc00:f853:ccd:e793::4\"\n                    }\n                ],\n                \"startTime\": \"2021-02-23T10:17:44Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-02-23T10:17:45Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy:v1.21.0-alpha.3.456_c7e85d33636431\",\n                        \"imageID\": \"sha256:43f654ab46f299d986697c1075c6c43d3773921c5ebdbd93cee976bbc407f0d1\",\n                        \"containerID\": \"containerd://8866a8630540d919b4d6f8aa2d4119ff20b6a3e67d3b6e4af8854f3e4ec49713\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"BestEffort\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-gv4kg\",\n                \"generateName\": \"kube-proxy-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": 
\"04827eed-042f-43d7-a3b9-0aec10925371\",\n                \"resourceVersion\": \"701\",\n                \"creationTimestamp\": \"2021-02-23T10:17:24Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"7dd75d4c87\",\n                    \"k8s-app\": \"kube-proxy\",\n                    \"pod-template-generation\": \"2\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kube-proxy\",\n                        \"uid\": \"cccc0f15-cff7-4fb6-8452-09ec7c055a10\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"configMap\": {\n                            \"name\": \"kube-proxy\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            \"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-2gjhw\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n              
                          \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy:v1.21.0-alpha.3.456_c7e85d33636431\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\",\n                            
\"--config=/var/lib/kube-proxy/config.conf\",\n                            \"--hostname-override=$(NODE_NAME)\",\n                            \"--v=4\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"NODE_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"spec.nodeName\"\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {},\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kube-proxy\",\n                                \"mountPath\": \"/var/lib/kube-proxy\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-2gjhw\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            }\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            
\"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeSelector\": {\n                    \"kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"kube-proxy\",\n                \"serviceAccount\": \"kube-proxy\",\n                \"nodeName\": \"kind-control-plane\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"kind-control-plane\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        
\"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:24Z\"\n         
           },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:25Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:25Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:24Z\"\n                    }\n                ],\n                \"hostIP\": \"fc00:f853:ccd:e793::2\",\n                \"podIP\": \"fc00:f853:ccd:e793::2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"fc00:f853:ccd:e793::2\"\n                    }\n                ],\n                \"startTime\": \"2021-02-23T10:17:24Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-02-23T10:17:25Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-proxy:v1.21.0-alpha.3.456_c7e85d33636431\",\n                        \"imageID\": \"sha256:43f654ab46f299d986697c1075c6c43d3773921c5ebdbd93cee976bbc407f0d1\",\n                        \"containerID\": \"containerd://9a69f635e0e3a0d66e15da463e933d7f7675f92c9444a0ab5cd3748b78710a22\",\n                        
\"started\": true\n                    }\n                ],\n                \"qosClass\": \"BestEffort\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-proxy-mg2kq\",\n                \"generateName\": \"kube-proxy-\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"751c0eee-e925-48c4-98c6-4ab70cfa1d9c\",\n                \"resourceVersion\": \"741\",\n                \"creationTimestamp\": \"2021-02-23T10:17:35Z\",\n                \"labels\": {\n                    \"controller-revision-hash\": \"7dd75d4c87\",\n                    \"k8s-app\": \"kube-proxy\",\n                    \"pod-template-generation\": \"2\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"apps/v1\",\n                        \"kind\": \"DaemonSet\",\n                        \"name\": \"kube-proxy\",\n                        \"uid\": \"cccc0f15-cff7-4fb6-8452-09ec7c055a10\",\n                        \"controller\": true,\n                        \"blockOwnerDeletion\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"configMap\": {\n                            \"name\": \"kube-proxy\",\n                            \"defaultMode\": 420\n                        }\n                    },\n                    {\n                        \"name\": \"xtables-lock\",\n                        \"hostPath\": {\n                            \"path\": \"/run/xtables.lock\",\n                            \"type\": \"FileOrCreate\"\n                        }\n                    },\n                    {\n                        \"name\": \"lib-modules\",\n                        \"hostPath\": {\n                            \"path\": \"/lib/modules\",\n                            
\"type\": \"\"\n                        }\n                    },\n                    {\n                        \"name\": \"kube-api-access-7vstx\",\n                        \"projected\": {\n                            \"sources\": [\n                                {\n                                    \"serviceAccountToken\": {\n                                        \"expirationSeconds\": 3607,\n                                        \"path\": \"token\"\n                                    }\n                                },\n                                {\n                                    \"configMap\": {\n                                        \"name\": \"kube-root-ca.crt\",\n                                        \"items\": [\n                                            {\n                                                \"key\": \"ca.crt\",\n                                                \"path\": \"ca.crt\"\n                                            }\n                                        ]\n                                    }\n                                },\n                                {\n                                    \"downwardAPI\": {\n                                        \"items\": [\n                                            {\n                                                \"path\": \"namespace\",\n                                                \"fieldRef\": {\n                                                    \"apiVersion\": \"v1\",\n                                                    \"fieldPath\": \"metadata.namespace\"\n                                                }\n                                            }\n                                        ]\n                                    }\n                                }\n                            ],\n                            \"defaultMode\": 420\n                        }\n                    }\n                ],\n                
\"containers\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"image\": \"k8s.gcr.io/kube-proxy:v1.21.0-alpha.3.456_c7e85d33636431\",\n                        \"command\": [\n                            \"/usr/local/bin/kube-proxy\",\n                            \"--config=/var/lib/kube-proxy/config.conf\",\n                            \"--hostname-override=$(NODE_NAME)\",\n                            \"--v=4\"\n                        ],\n                        \"env\": [\n                            {\n                                \"name\": \"NODE_NAME\",\n                                \"valueFrom\": {\n                                    \"fieldRef\": {\n                                        \"apiVersion\": \"v1\",\n                                        \"fieldPath\": \"spec.nodeName\"\n                                    }\n                                }\n                            }\n                        ],\n                        \"resources\": {},\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kube-proxy\",\n                                \"mountPath\": \"/var/lib/kube-proxy\"\n                            },\n                            {\n                                \"name\": \"xtables-lock\",\n                                \"mountPath\": \"/run/xtables.lock\"\n                            },\n                            {\n                                \"name\": \"lib-modules\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/lib/modules\"\n                            },\n                            {\n                                \"name\": \"kube-api-access-7vstx\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\"\n                            
}\n                        ],\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\",\n                        \"securityContext\": {\n                            \"privileged\": true\n                        }\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeSelector\": {\n                    \"kubernetes.io/os\": \"linux\"\n                },\n                \"serviceAccountName\": \"kube-proxy\",\n                \"serviceAccount\": \"kube-proxy\",\n                \"nodeName\": \"kind-worker\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"affinity\": {\n                    \"nodeAffinity\": {\n                        \"requiredDuringSchedulingIgnoredDuringExecution\": {\n                            \"nodeSelectorTerms\": [\n                                {\n                                    \"matchFields\": [\n                                        {\n                                            \"key\": \"metadata.name\",\n                                            \"operator\": \"In\",\n                                            \"values\": [\n                                                \"kind-worker\"\n                                            ]\n                                        }\n                                    ]\n                                }\n                            ]\n                        }\n                    }\n                },\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"key\": \"CriticalAddonsOnly\",\n                        \"operator\": 
\"Exists\"\n                    },\n                    {\n                        \"operator\": \"Exists\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/not-ready\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unreachable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/disk-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/memory-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/pid-pressure\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/unschedulable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    },\n                    {\n                        \"key\": \"node.kubernetes.io/network-unavailable\",\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoSchedule\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": 
\"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:35Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:36Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:36Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:35Z\"\n                    }\n                ],\n                \"hostIP\": \"fc00:f853:ccd:e793::3\",\n                \"podIP\": \"fc00:f853:ccd:e793::3\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"fc00:f853:ccd:e793::3\"\n                    }\n                ],\n                \"startTime\": \"2021-02-23T10:17:35Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-proxy\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-02-23T10:17:35Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": 
\"k8s.gcr.io/kube-proxy:v1.21.0-alpha.3.456_c7e85d33636431\",\n                        \"imageID\": \"sha256:43f654ab46f299d986697c1075c6c43d3773921c5ebdbd93cee976bbc407f0d1\",\n                        \"containerID\": \"containerd://cc69422acee095af5dfc66407e77ff5d088616ffd47f436b312caf35c342661c\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"BestEffort\"\n            }\n        },\n        {\n            \"metadata\": {\n                \"name\": \"kube-scheduler-kind-control-plane\",\n                \"namespace\": \"kube-system\",\n                \"uid\": \"4181184e-a64c-4c89-8f2e-b05d2e355d8c\",\n                \"resourceVersion\": \"746\",\n                \"creationTimestamp\": \"2021-02-23T10:16:14Z\",\n                \"labels\": {\n                    \"component\": \"kube-scheduler\",\n                    \"tier\": \"control-plane\"\n                },\n                \"annotations\": {\n                    \"kubernetes.io/config.hash\": \"899296709c7e105547edfc227d6ddf54\",\n                    \"kubernetes.io/config.mirror\": \"899296709c7e105547edfc227d6ddf54\",\n                    \"kubernetes.io/config.seen\": \"2021-02-23T10:16:14.262279535Z\",\n                    \"kubernetes.io/config.source\": \"file\"\n                },\n                \"ownerReferences\": [\n                    {\n                        \"apiVersion\": \"v1\",\n                        \"kind\": \"Node\",\n                        \"name\": \"kind-control-plane\",\n                        \"uid\": \"930ce15c-8b58-41c9-b0b6-88b9947767f3\",\n                        \"controller\": true\n                    }\n                ]\n            },\n            \"spec\": {\n                \"volumes\": [\n                    {\n                        \"name\": \"kubeconfig\",\n                        \"hostPath\": {\n                            \"path\": \"/etc/kubernetes/scheduler.conf\",\n              
              \"type\": \"FileOrCreate\"\n                        }\n                    }\n                ],\n                \"containers\": [\n                    {\n                        \"name\": \"kube-scheduler\",\n                        \"image\": \"k8s.gcr.io/kube-scheduler:v1.21.0-alpha.3.456_c7e85d33636431\",\n                        \"command\": [\n                            \"kube-scheduler\",\n                            \"--address=::\",\n                            \"--authentication-kubeconfig=/etc/kubernetes/scheduler.conf\",\n                            \"--authorization-kubeconfig=/etc/kubernetes/scheduler.conf\",\n                            \"--bind-address=::1\",\n                            \"--kubeconfig=/etc/kubernetes/scheduler.conf\",\n                            \"--leader-elect=true\",\n                            \"--port=0\",\n                            \"--v=4\"\n                        ],\n                        \"resources\": {\n                            \"requests\": {\n                                \"cpu\": \"100m\"\n                            }\n                        },\n                        \"volumeMounts\": [\n                            {\n                                \"name\": \"kubeconfig\",\n                                \"readOnly\": true,\n                                \"mountPath\": \"/etc/kubernetes/scheduler.conf\"\n                            }\n                        ],\n                        \"livenessProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 10259,\n                                \"host\": \"::1\",\n                                \"scheme\": \"HTTPS\"\n                            },\n                            \"initialDelaySeconds\": 10,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            
\"successThreshold\": 1,\n                            \"failureThreshold\": 8\n                        },\n                        \"startupProbe\": {\n                            \"httpGet\": {\n                                \"path\": \"/healthz\",\n                                \"port\": 10259,\n                                \"host\": \"::1\",\n                                \"scheme\": \"HTTPS\"\n                            },\n                            \"initialDelaySeconds\": 10,\n                            \"timeoutSeconds\": 15,\n                            \"periodSeconds\": 10,\n                            \"successThreshold\": 1,\n                            \"failureThreshold\": 24\n                        },\n                        \"terminationMessagePath\": \"/dev/termination-log\",\n                        \"terminationMessagePolicy\": \"File\",\n                        \"imagePullPolicy\": \"IfNotPresent\"\n                    }\n                ],\n                \"restartPolicy\": \"Always\",\n                \"terminationGracePeriodSeconds\": 30,\n                \"dnsPolicy\": \"ClusterFirst\",\n                \"nodeName\": \"kind-control-plane\",\n                \"hostNetwork\": true,\n                \"securityContext\": {},\n                \"schedulerName\": \"default-scheduler\",\n                \"tolerations\": [\n                    {\n                        \"operator\": \"Exists\",\n                        \"effect\": \"NoExecute\"\n                    }\n                ],\n                \"priorityClassName\": \"system-node-critical\",\n                \"priority\": 2000001000,\n                \"enableServiceLinks\": true,\n                \"preemptionPolicy\": \"PreemptLowerPriority\"\n            },\n            \"status\": {\n                \"phase\": \"Running\",\n                \"conditions\": [\n                    {\n                        \"type\": \"Initialized\",\n                        \"status\": 
\"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:16:14Z\"\n                    },\n                    {\n                        \"type\": \"Ready\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:36Z\"\n                    },\n                    {\n                        \"type\": \"ContainersReady\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:17:36Z\"\n                    },\n                    {\n                        \"type\": \"PodScheduled\",\n                        \"status\": \"True\",\n                        \"lastProbeTime\": null,\n                        \"lastTransitionTime\": \"2021-02-23T10:16:14Z\"\n                    }\n                ],\n                \"hostIP\": \"fc00:f853:ccd:e793::2\",\n                \"podIP\": \"fc00:f853:ccd:e793::2\",\n                \"podIPs\": [\n                    {\n                        \"ip\": \"fc00:f853:ccd:e793::2\"\n                    }\n                ],\n                \"startTime\": \"2021-02-23T10:16:14Z\",\n                \"containerStatuses\": [\n                    {\n                        \"name\": \"kube-scheduler\",\n                        \"state\": {\n                            \"running\": {\n                                \"startedAt\": \"2021-02-23T10:15:31Z\"\n                            }\n                        },\n                        \"lastState\": {},\n                        \"ready\": true,\n                        \"restartCount\": 0,\n                        \"image\": \"k8s.gcr.io/kube-scheduler:v1.21.0-alpha.3.456_c7e85d33636431\",\n                        \"imageID\": \"sha256:ede8e76d81772ab7735883bfdb956aa25054e0cff63da06bb79f4c1a3769dbbe\",\n     
                   \"containerID\": \"containerd://34587e11beb099de182244cb97e8800b2af366ca37a031ceff626d619bc97f88\",\n                        \"started\": true\n                    }\n                ],\n                \"qosClass\": \"Burstable\"\n            }\n        }\n    ]\n}\n==== START logs for container coredns of pod kube-system/coredns-85d9df8444-56kd2 ====\n[INFO] plugin/ready: Still waiting on: \"kubernetes\"\n.:53\n[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7\nCoreDNS-1.8.0\nlinux/amd64, go1.15.3, 054c9ae\n[ERROR] plugin/errors: 2 1178846345762078538.7388362400828524416. HINFO: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 nodeport-service.services-1528.svc.cluster.local.c.k8s-infra-prow-build.internal. A: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 nodeport-service.services-1528.svc.cluster.local.c.k8s-infra-prow-build.internal. A: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 nodeport-service.services-1528.svc.cluster.local.c.k8s-infra-prow-build.internal. A: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 nodeport-service.services-1528.svc.cluster.local.c.k8s-infra-prow-build.internal. A: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 nodeport-service.services-1528.svc.cluster.local.c.k8s-infra-prow-build.internal. A: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[INFO] Reloading\n[INFO] plugin/health: Going into lameduck mode for 5s\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 nodeport-service.services-1528.svc.cluster.local.c.k8s-infra-prow-build.internal. A: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 nodeport-service.services-1528.svc.cluster.local.c.k8s-infra-prow-build.internal. A: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[INFO] plugin/reload: Running configuration MD5 = 7bfe91ef5b9783c748c3420bf7763479\n[INFO] Reloading complete\n[ERROR] plugin/errors: 2 localhost. A: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 localhost. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 localhost. A: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 localhost. A: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 localhost. A: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 localhost. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 localhost. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 localhost. 
AAAA: dial udp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 localhost. A: dial udp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 localhost. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 dns-test-service. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 dns-test-service. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 dns-test-service.dns-1862. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 dns-test-service.dns-1862. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 dns-test-service.dns-1862.svc. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 dns-test-service.dns-1862.svc. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 _http._tcp.dns-test-service.dns-1862.svc. SRV: dial udp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 _http._tcp.dns-test-service.dns-1862.svc. SRV: dial tcp 172.18.0.1:53: connect: network is unreachable
... skipping repeated "network is unreachable" lines ...
==== END logs for container coredns of pod kube-system/coredns-85d9df8444-56kd2 ====
==== START logs for container coredns of pod kube-system/coredns-85d9df8444-599l7 ====
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.8.0
linux/amd64, go1.15.3, 054c9ae
[ERROR] plugin/errors: 2 3450749769798322286.7902067465899682772. HINFO: dial udp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 nodeport-service.services-1528.svc.cluster.local.c.k8s-infra-prow-build.internal. A: dial udp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable
... skipping repeated "network is unreachable" lines ...
[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 nodeport-service.services-1528.svc.cluster.local.c.k8s-infra-prow-build.internal. A: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 nodeport-service.services-1528.svc.cluster.local.c.k8s-infra-prow-build.internal. A: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 nodeport-service.services-1528.svc.cluster.local.c.k8s-infra-prow-build.internal. A: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 nodeport-service.services-1528.svc.cluster.local.c.k8s-infra-prow-build.internal. A: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 nodeport-service.services-1528.svc.cluster.local.c.k8s-infra-prow-build.internal. A: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 nodeport-service.services-1528.svc.cluster.local.c.k8s-infra-prow-build.internal. A: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[INFO] Reloading\n[INFO] plugin/health: Going into lameduck mode for 5s\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 nodeport-service.services-1528.svc.cluster.local.c.k8s-infra-prow-build.internal. A: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. 
AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-2--20.dns-3718.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--22.dns-863.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 fd00-10-244-1--6.dns-645.pod.cluster.local.c.k8s-infra-prow-build.internal. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[INFO] plugin/reload: Running configuration MD5 = 7bfe91ef5b9783c748c3420bf7763479\n[INFO] Reloading complete\n[ERROR] plugin/errors: 2 localhost. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 localhost. A: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service.dns-1862. 
AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service.dns-1862. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service.dns-1862.svc. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service.dns-1862.svc. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 _http._tcp.dns-test-service.dns-1862.svc. SRV: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 _http._tcp.dns-test-service.dns-1862.svc. SRV: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service.dns-1862. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service.dns-1862. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service.dns-1862.svc. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service.dns-1862. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service.dns-1862.svc. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 _http._tcp.dns-test-service.dns-1862.svc. SRV: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 _http._tcp.dns-test-service.dns-1862.svc. SRV: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 _http._tcp.dns-test-service.dns-1862.svc. SRV: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 _http._tcp.dns-test-service.dns-1862.svc. 
SRV: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service.dns-1862. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service.dns-1862. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service.dns-1862. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service.dns-1862.svc. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service.dns-1862. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 _http._tcp.dns-test-service.dns-1862.svc. SRV: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service.dns-1862. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service.dns-1862. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 _http._tcp.dns-test-service.dns-1862.svc. SRV: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 _http._tcp.dns-test-service.dns-1862.svc. SRV: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable\n[ERROR] plugin/errors: 2 dns-test-service.dns-1862. 
AAAA: dial udp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 dns-test-service. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 dns-test-service. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 dns-test-service.dns-1862. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 dns-test-service.dns-1862. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 dns-test-service.dns-1862.svc. AAAA: dial udp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 dns-test-service.dns-1862.svc. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 _http._tcp.dns-test-service.dns-1862.svc. SRV: dial udp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 _http._tcp.dns-test-service.dns-1862.svc. SRV: dial tcp 172.18.0.1:53: connect: network is unreachable
... skipping repeats of the eight CoreDNS error messages above ...
[ERROR] plugin/errors: 2 dns-test-service.dns-1862. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 _http._tcp.dns-test-service.dns-1862.svc. SRV: dial tcp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 dns-test-service. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 dns-test-service.dns-1862.svc. AAAA: dial tcp 172.18.0.1:53: connect: network is unreachable
[ERROR] plugin/errors: 2 _http._tcp.dns-test-service.dns-1862.svc. SRV: dial udp 172.18.0.1:53: connect: network is unreachable
==== END logs for container coredns of pod kube-system/coredns-85d9df8444-599l7 ====
==== START logs for container etcd of pod kube-system/etcd-kind-control-plane ====
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2021-02-23 10:15:43.181287 I | etcdmain: etcd Version: 3.4.13
2021-02-23 10:15:43.181342 I | etcdmain: Git SHA: ae9734ed2
2021-02-23 10:15:43.181347 I | etcdmain: Go Version: go1.12.17
2021-02-23 10:15:43.181351 I | etcdmain: Go OS/Arch: linux/amd64
2021-02-23 10:15:43.181356 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2021-02-23 10:15:43.181457 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = 
2021-02-23 10:15:43.182334 I | embed: name = kind-control-plane
2021-02-23 10:15:43.182357 I | embed: data dir = /var/lib/etcd
2021-02-23 10:15:43.182361 I | embed: member dir = /var/lib/etcd/member
2021-02-23 10:15:43.182364 I | embed: heartbeat = 100ms
2021-02-23 10:15:43.182367 I | embed: election = 1000ms
2021-02-23 10:15:43.182370 I | embed: snapshot count = 10000
2021-02-23 10:15:43.182377 I | embed: advertise client URLs = https://[fc00:f853:ccd:e793::2]:2379
2021-02-23 10:15:43.192829 I | etcdserver: 
starting member cb2acda43b5ed426 in cluster e0285316cc52b0f2
raft2021/02/23 10:15:43 INFO: cb2acda43b5ed426 switched to configuration voters=()
raft2021/02/23 10:15:43 INFO: cb2acda43b5ed426 became follower at term 0
raft2021/02/23 10:15:43 INFO: newRaft cb2acda43b5ed426 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2021/02/23 10:15:43 INFO: cb2acda43b5ed426 became follower at term 1
raft2021/02/23 10:15:43 INFO: cb2acda43b5ed426 switched to configuration voters=(14639739643975619622)
2021-02-23 10:15:43.197400 W | auth: simple token is not cryptographically signed
2021-02-23 10:15:43.205077 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
2021-02-23 10:15:43.205292 I | etcdserver: cb2acda43b5ed426 as single-node; fast-forwarding 9 ticks (election ticks 10)
raft2021/02/23 10:15:43 INFO: cb2acda43b5ed426 switched to configuration voters=(14639739643975619622)
2021-02-23 10:15:43.205912 I | etcdserver/membership: added member cb2acda43b5ed426 [https://[fc00:f853:ccd:e793::2]:2380] to cluster e0285316cc52b0f2
2021-02-23 10:15:43.207331 I | embed: ClientTLS: cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = 
2021-02-23 10:15:43.207402 I | embed: listening for peers on [fc00:f853:ccd:e793::2]:2380
2021-02-23 10:15:43.207567 I | embed: listening for metrics on http://[::1]:2381
raft2021/02/23 10:15:43 INFO: cb2acda43b5ed426 is starting a new election at term 1
raft2021/02/23 10:15:43 INFO: cb2acda43b5ed426 became candidate at term 2
raft2021/02/23 10:15:43 INFO: cb2acda43b5ed426 received MsgVoteResp from cb2acda43b5ed426 at term 2
raft2021/02/23 10:15:43 INFO: cb2acda43b5ed426 became leader at term 2
raft2021/02/23 10:15:43 INFO: raft.node: cb2acda43b5ed426 elected leader cb2acda43b5ed426 at term 2
2021-02-23 10:15:43.794554 I | etcdserver: published {Name:kind-control-plane ClientURLs:[https://[fc00:f853:ccd:e793::2]:2379]} to cluster e0285316cc52b0f2
2021-02-23 10:15:43.794585 I | embed: ready to serve client requests
2021-02-23 10:15:43.794658 I | etcdserver: setting up the initial cluster version to 3.4
2021-02-23 10:15:43.794752 I | embed: ready to serve client requests
2021-02-23 10:15:43.796176 N | etcdserver/membership: set the initial cluster version to 3.4
2021-02-23 10:15:43.796404 I | etcdserver/api: enabled capabilities for version 3.4
2021-02-23 10:15:43.797705 I | embed: serving client requests on [::1]:2379
2021-02-23 10:15:43.797793 I | embed: serving client requests on [fc00:f853:ccd:e793::2]:2379
2021-02-23 10:15:54.218916 I | etcdserver/api/etcdhttp: /health OK (status code 200)
... skipping periodic "/health OK (status code 200)" checks between 10:15:58 and 10:16:56 ...
2021-02-23 10:17:01.081793 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:500 " with result "range_response_count:3 size:13170" took too long (128.843274ms) to execute
... skipping periodic "/health OK (status code 200)" checks between 10:17:06 and 10:17:36 ...
2021-02-23 10:17:46.713368 I | 
etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-23 10:17:52.239218 W | etcdserver: read-only range request "key:\"/registry/services/specs/apf-3319/\" range_end:\"/registry/services/specs/apf-33190\" " with result "range_response_count:0 size:5" took too long (124.275852ms) to execute
2021-02-23 10:17:52.239279 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/metadata-concealment-5894/default\" " with result "range_response_count:1 size:210" took too long (121.186772ms) to execute
... skipping similar etcdserver "read-only range request ... took too long" (roughly 100-300ms) warnings for other /registry keys between 10:17:52 and 10:18:48, interleaved with periodic "/health OK (status code 200)" checks ...
2021-02-23 10:17:56.792652 W | etcdserver: request "header:<ID:15287037713466877759 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/prestop-3547/pod-prestop-hook-f3791718-6c38-4163-b791-01dd5e75ff6e.1666591818bc1cb7\" mod_revision:0 > success:<request_put:<key:\"/registry/events/prestop-3547/pod-prestop-hook-f3791718-6c38-4163-b791-01dd5e75ff6e.1666591818bc1cb7\" value_size:739 lease:6063665676612099890 >> failure:<>>" with result "size:16" took too long (207.827374ms) to execute
... skipping periodic "/health OK (status code 200)" checks between 10:18:56 and 10:19:56 and further "read-only range request ... took too long" warnings between 10:20:00 and 10:20:02 ...
2021-02-23 10:20:02.851700 I | etcdserver: start to snapshot (applied: 10001, lastsnap: 0)
2021-02-23 10:20:02.892774 I | etcdserver: saved snapshot at index 10001
2021-02-23 10:20:02.893593 I | etcdserver: compacted raft log at 5001
... skipping further "read-only range request ... took too long" warnings between 10:20:02 and 10:20:03 ...
2021-02-23 10:20:06.700196 I | 
etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-02-23 10:20:07.005447 W | etcdserver: read-only range request \"key:\\\"/registry/pods/nettest-6687/test-container-pod\\\" \" with result \"range_response_count:1 size:2417\" took too long (150.312027ms) to execute\n2021-02-23 10:20:16.699154 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-02-23 10:20:26.699053 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-02-23 10:20:36.714938 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-02-23 10:20:46.700270 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n2021-02-23 10:20:56.708938 I | etcdserver/api/etcdhttp: /health OK (status code 200)\n==== END logs for container etcd of pod kube-system/etcd-kind-control-plane ====\n==== START logs for container kindnet-cni of pod kube-system/kindnet-6nfp4 ====\nI0223 10:17:02.491169       1 main.go:229] probe TCP address kind-control-plane:6443\nI0223 10:17:02.495729       1 main.go:88] connected to apiserver: https://kind-control-plane:6443\nI0223 10:17:02.495764       1 main.go:93] hostIP = fc00:f853:ccd:e793::2\npodIP = fc00:f853:ccd:e793::2\nI0223 10:17:02.495882       1 main.go:102] setting mtu 1500 for CNI \nI0223 10:17:03.006704       1 main.go:185] handling current node\nI0223 10:17:03.091534       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:17:03.095585       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:17:03.096335       1 routes.go:47] Adding route {Ifindex: 0 Dst: fd00:10:244:2::/64 Src: <nil> Gw: fc00:f853:ccd:e793::3 Flags: [] Table: 0} \nI0223 10:17:03.096556       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:17:03.096580       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:17:03.098688       1 routes.go:47] Adding route {Ifindex: 0 Dst: fd00:10:244:1::/64 Src: <nil> Gw: fc00:f853:ccd:e793::4 Flags: [] Table: 0} \nI0223 10:17:13.192326       1 
main.go:185] handling current node\nI0223 10:17:13.192360       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:17:13.192366       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:17:13.192470       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:17:13.192481       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:17:23.199351       1 main.go:185] handling current node\nI0223 10:17:23.199756       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:17:23.199946       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:17:23.200285       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:17:23.200456       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:17:33.208503       1 main.go:185] handling current node\nI0223 10:17:33.208871       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:17:33.209056       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:17:33.209368       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:17:33.209547       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:17:43.214837       1 main.go:185] handling current node\nI0223 10:17:43.214867       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:17:43.214872       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:17:43.214979       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:17:43.214989       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:17:53.220781       1 main.go:185] handling current node\nI0223 10:17:53.220820       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:17:53.220829       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:17:53.220961       1 main.go:196] Handling node with IP: 
fc00:f853:ccd:e793::4\nI0223 10:17:53.220980       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:18:03.292018       1 main.go:185] handling current node\nI0223 10:18:03.292056       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:18:03.292066       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:18:03.292276       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:18:03.292297       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:18:13.299040       1 main.go:185] handling current node\nI0223 10:18:13.299081       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:18:13.299090       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:18:13.299220       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:18:13.299239       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:18:23.342155       1 main.go:185] handling current node\nI0223 10:18:23.342203       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:18:23.342209       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:18:23.342382       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:18:23.342400       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:18:33.349185       1 main.go:185] handling current node\nI0223 10:18:33.349234       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:18:33.349242       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:18:33.349385       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:18:33.349394       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:18:43.361078       1 main.go:185] handling current node\nI0223 10:18:43.361223       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:18:43.361253       1 
main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:18:43.361453       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:18:43.361504       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:18:53.374645       1 main.go:185] handling current node\nI0223 10:18:53.374775       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:18:53.374809       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:18:53.374989       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:18:53.375048       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:19:03.408528       1 main.go:185] handling current node\nI0223 10:19:03.408571       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:19:03.408579       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:19:03.408713       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:19:03.408721       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:19:13.420131       1 main.go:185] handling current node\nI0223 10:19:13.420299       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:19:13.420341       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:19:13.420493       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:19:13.420512       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:19:23.439256       1 main.go:185] handling current node\nI0223 10:19:23.439302       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:19:23.439310       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:19:23.439442       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:19:23.439449       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:19:33.446713       1 main.go:185] handling current 
node\nI0223 10:19:33.446751       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:19:33.446758       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:19:33.446888       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:19:33.446914       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:19:43.465718       1 main.go:185] handling current node\nI0223 10:19:43.465776       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:19:43.465786       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:19:43.465930       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:19:43.465948       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:19:53.494310       1 main.go:185] handling current node\nI0223 10:19:53.494626       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:19:53.494862       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:19:53.495394       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:19:53.495572       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:20:03.569357       1 main.go:185] handling current node\nI0223 10:20:03.569400       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:20:03.569407       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:20:03.569648       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:20:03.569677       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:20:13.587098       1 main.go:185] handling current node\nI0223 10:20:13.587707       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:20:13.587728       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:20:13.588015       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:20:13.588043     
  1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:20:23.594654       1 main.go:185] handling current node\nI0223 10:20:23.594695       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:20:23.594703       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:20:23.594838       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:20:23.594851       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:20:33.601888       1 main.go:185] handling current node\nI0223 10:20:33.601934       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:20:33.601942       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:20:33.602328       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:20:33.602389       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:20:43.613691       1 main.go:185] handling current node\nI0223 10:20:43.613733       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:20:43.613742       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:20:43.613876       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:20:43.613891       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:20:53.618731       1 main.go:185] handling current node\nI0223 10:20:53.618769       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:20:53.618776       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:20:53.618899       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:20:53.618904       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:21:03.629733       1 main.go:185] handling current node\nI0223 10:21:03.629766       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:21:03.629775       1 main.go:197] Node kind-worker has CIDR 
fd00:10:244:2::/64 \nI0223 10:21:03.629904       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:21:03.629913       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \n==== END logs for container kindnet-cni of pod kube-system/kindnet-6nfp4 ====\n==== START logs for container kindnet-cni of pod kube-system/kindnet-9cqk8 ====\nI0223 10:17:02.495242       1 main.go:229] probe TCP address kind-control-plane:6443\nI0223 10:17:02.598815       1 main.go:88] connected to apiserver: https://kind-control-plane:6443\nI0223 10:17:02.598855       1 main.go:93] hostIP = fc00:f853:ccd:e793::3\npodIP = fc00:f853:ccd:e793::3\nI0223 10:17:02.598992       1 main.go:102] setting mtu 1500 for CNI \nI0223 10:17:03.094083       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:17:03.094216       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:17:03.094655       1 routes.go:47] Adding route {Ifindex: 0 Dst: fd00:10:244::/64 Src: <nil> Gw: fc00:f853:ccd:e793::2 Flags: [] Table: 0} \nI0223 10:17:03.094756       1 main.go:185] handling current node\nI0223 10:17:03.100174       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:17:03.100272       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:17:03.100457       1 routes.go:47] Adding route {Ifindex: 0 Dst: fd00:10:244:1::/64 Src: <nil> Gw: fc00:f853:ccd:e793::4 Flags: [] Table: 0} \nI0223 10:17:13.107263       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:17:13.107298       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:17:13.107442       1 main.go:185] handling current node\nI0223 10:17:13.107460       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:17:13.107465       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:17:23.123343       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:17:23.123382       
1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:17:23.123617       1 main.go:185] handling current node\nI0223 10:17:23.123636       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:17:23.123642       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:17:33.155993       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:17:33.156040       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:17:33.156245       1 main.go:185] handling current node\nI0223 10:17:33.156262       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:17:33.156269       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:17:43.171214       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:17:43.171247       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:17:43.171421       1 main.go:185] handling current node\nI0223 10:17:43.171445       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:17:43.171450       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:17:53.187160       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:17:53.187198       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:17:53.187509       1 main.go:185] handling current node\nI0223 10:17:53.187528       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:17:53.187534       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:18:03.197299       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:18:03.197336       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:18:03.198338       1 main.go:185] handling current node\nI0223 10:18:03.198375       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:18:03.198383       1 main.go:197] Node 
kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:18:13.204937       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:18:13.204965       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:18:13.208758       1 main.go:185] handling current node\nI0223 10:18:13.208802       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:18:13.208809       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:18:23.216967       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:18:23.217007       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:18:23.217354       1 main.go:185] handling current node\nI0223 10:18:23.217386       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:18:23.217391       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:18:33.223610       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:18:33.223644       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:18:33.224232       1 main.go:185] handling current node\nI0223 10:18:33.224256       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:18:33.224262       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:18:43.230189       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:18:43.230227       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:18:43.230632       1 main.go:185] handling current node\nI0223 10:18:43.230657       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:18:43.230661       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:18:53.239565       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:18:53.239597       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:18:53.240391       1 main.go:185] handling 
current node\nI0223 10:18:53.240415       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:18:53.240422       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:19:03.270759       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:19:03.270796       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:19:03.271279       1 main.go:185] handling current node\nI0223 10:19:03.271318       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:19:03.271325       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:19:13.283661       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:19:13.283702       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:19:13.285295       1 main.go:185] handling current node\nI0223 10:19:13.285411       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:19:13.285444       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:19:23.298675       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:19:23.298714       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:19:23.299344       1 main.go:185] handling current node\nI0223 10:19:23.299385       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:19:23.299392       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:19:33.305778       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:19:33.305809       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:19:33.306837       1 main.go:185] handling current node\nI0223 10:19:33.306867       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:19:33.306880       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:19:43.441863       1 main.go:196] Handling node with IP: 
fc00:f853:ccd:e793::2\nI0223 10:19:43.441897       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:19:43.442404       1 main.go:185] handling current node\nI0223 10:19:43.442423       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:19:43.442430       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:19:53.455120       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:19:53.455152       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:19:53.458282       1 main.go:185] handling current node\nI0223 10:19:53.458320       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:19:53.458458       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:20:03.470988       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:20:03.471022       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:20:03.471872       1 main.go:185] handling current node\nI0223 10:20:03.471897       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:20:03.471905       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:20:13.480579       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:20:13.480641       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:20:13.481204       1 main.go:185] handling current node\nI0223 10:20:13.481249       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:20:13.481257       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:20:23.498735       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:20:23.498766       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:20:23.499172       1 main.go:185] handling current node\nI0223 10:20:23.499201       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 
10:20:23.499207       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:20:33.506958       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:20:33.506999       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:20:33.507392       1 main.go:185] handling current node\nI0223 10:20:33.507426       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:20:33.507432       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:20:43.515702       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:20:43.515732       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:20:43.516242       1 main.go:185] handling current node\nI0223 10:20:43.516273       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:20:43.516277       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:20:53.526888       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:20:53.526930       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:20:53.527365       1 main.go:185] handling current node\nI0223 10:20:53.527386       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:20:53.527393       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \nI0223 10:21:03.537540       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:21:03.537580       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:21:03.538110       1 main.go:185] handling current node\nI0223 10:21:03.538141       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::4\nI0223 10:21:03.538149       1 main.go:197] Node kind-worker2 has CIDR fd00:10:244:1::/64 \n==== END logs for container kindnet-cni of pod kube-system/kindnet-9cqk8 ====\n==== START logs for container kindnet-cni of pod kube-system/kindnet-p2pbg ====\nI0223 10:17:02.491516       1 
main.go:229] probe TCP address kind-control-plane:6443\nI0223 10:17:02.514139       1 main.go:88] connected to apiserver: https://kind-control-plane:6443\nI0223 10:17:02.591636       1 main.go:93] hostIP = fc00:f853:ccd:e793::4\npodIP = fc00:f853:ccd:e793::4\nI0223 10:17:02.591922       1 main.go:102] setting mtu 1500 for CNI \nI0223 10:17:03.196028       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:17:03.196065       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:17:03.196747       1 routes.go:47] Adding route {Ifindex: 0 Dst: fd00:10:244::/64 Src: <nil> Gw: fc00:f853:ccd:e793::2 Flags: [] Table: 0} \nI0223 10:17:03.197349       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:17:03.197372       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:17:03.197535       1 routes.go:47] Adding route {Ifindex: 0 Dst: fd00:10:244:2::/64 Src: <nil> Gw: fc00:f853:ccd:e793::3 Flags: [] Table: 0} \nI0223 10:17:03.197562       1 main.go:185] handling current node\nI0223 10:17:13.208791       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:17:13.208828       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:17:13.208952       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:17:13.208967       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:17:13.209036       1 main.go:185] handling current node\nI0223 10:17:23.235116       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:17:23.235150       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:17:23.235311       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:17:23.235318       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:17:23.235383       1 main.go:185] handling current node\nI0223 10:17:33.242449       1 main.go:196] Handling node with IP: 
fc00:f853:ccd:e793::2\nI0223 10:17:33.242493       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:17:33.242658       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:17:33.242673       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:17:33.242737       1 main.go:185] handling current node\nI0223 10:17:43.248530       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:17:43.248568       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:17:43.248721       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:17:43.248727       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:17:43.248806       1 main.go:185] handling current node\nI0223 10:17:53.254627       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:17:53.254663       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:17:53.254971       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:17:53.254995       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:17:53.255194       1 main.go:185] handling current node\nI0223 10:18:03.262066       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:18:03.262099       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:18:03.263250       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:18:03.263316       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 \nI0223 10:18:03.263726       1 main.go:185] handling current node\nI0223 10:18:13.272969       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2\nI0223 10:18:13.273004       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 \nI0223 10:18:13.273393       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3\nI0223 10:18:13.273413       1 main.go:197] Node kind-worker has CIDR 
fd00:10:244:2::/64 
I0223 10:18:13.273647       1 main.go:185] handling current node
I0223 10:18:23.283072       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2
I0223 10:18:23.283106       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 
I0223 10:18:23.283494       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3
I0223 10:18:23.283504       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 
I0223 10:18:23.283775       1 main.go:185] handling current node
... skipping 75 lines ...
I0223 10:21:03.587598       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::2
I0223 10:21:03.587638       1 main.go:197] Node kind-control-plane has CIDR fd00:10:244::/64 
I0223 10:21:03.588311       1 main.go:196] Handling node with IP: fc00:f853:ccd:e793::3
I0223 10:21:03.588339       1 main.go:197] Node kind-worker has CIDR fd00:10:244:2::/64 
I0223 10:21:03.588761       1 main.go:185] handling current node
==== END logs for container kindnet-cni of pod kube-system/kindnet-p2pbg ====
==== START logs for container kube-apiserver of pod kube-system/kube-apiserver-kind-control-plane ====
I0223 10:20:55.218481       1 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/services-9882/pods/up-down-2-sbfvm" latency="4.676991ms" userAgent="kubelet/v1.21.0 (linux/amd64) kubernetes/c7e85d3" srcIP="172.18.0.4:54138" resp=200
I0223 10:20:55.295452       1 httplog.go:89] "HTTP" verb="PUT" URI="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kind-worker?timeout=10s" latency="4.012899ms" userAgent="kubelet/v1.21.0 (linux/amd64) kubernetes/c7e85d3" srcIP="172.18.0.3:48596" resp=200
I0223 10:20:55.297684       1 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/nodes/kind-worker2?resourceVersion=0&timeout=10s" latency="1.144369ms" userAgent="kubelet/v1.21.0 (linux/amd64) kubernetes/c7e85d3" srcIP="172.18.0.4:54138" resp=200
I0223 10:20:55.324788       1 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/pods-9504/pods/pod-submit-remove-11249ab6-a80a-4b2f-ab03-ad36f5ff8136" latency="2.902573ms" userAgent="kubelet/v1.21.0 (linux/amd64) kubernetes/c7e85d3" srcIP="172.18.0.3:48596" resp=200
I0223 10:20:55.337786       1 httplog.go:89] "HTTP" verb="GET" URI="/apis/batch/v1/namespaces/cronjob-1912/jobs" latency="2.345745ms" userAgent="e2e.test/v1.21.0 (linux/amd64) kubernetes/c7e85d3 -- [sig-apps] CronJob should delete successful finished jobs with limit of one successful job" srcIP="[fc00:f853:ccd:e793::1]:38426" resp=200
I0223 10:20:55.379980       1 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kubectl-4095/pods/httpd" latency="2.294781ms" userAgent="e2e.test/v1.21.0 (linux/amd64) kubernetes/c7e85d3 -- [sig-cli] Kubectl client Simple pod should return command exit codes" srcIP="[fc00:f853:ccd:e793::1]:38468" resp=200
I0223 10:20:55.407974       1 httplog.go:89] "HTTP" verb="PATCH" URI="/api/v1/namespaces/port-forwarding-813/events/pfpod.1666593eb82b1d3c" latency="5.379373ms" userAgent="kubelet/v1.21.0 (linux/amd64) kubernetes/c7e85d3" srcIP="172.18.0.4:54138" resp=200
I0223 10:20:55.419393       1 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/nettest-4715/pods/netserver-1" latency="4.062005ms" userAgent="e2e.test/v1.21.0 (linux/amd64) kubernetes/c7e85d3 -- [sig-network] Networking Granular Checks: Services should function for multiple endpoint-Services with same selector" srcIP="[fc00:f853:ccd:e793::1]:38460" resp=200
I0223 10:20:55.419445       1 httplog.go:89] "HTTP" verb="PATCH" URI="/api/v1/namespaces/services-9882/pods/up-down-2-sbfvm/status" latency="5.580332ms" userAgent="kubelet/v1.21.0 (linux/amd64) kubernetes/c7e85d3" srcIP="172.18.0.4:54138" resp=200
I0223 10:20:55.479694       1 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/nodes/kind-worker?resourceVersion=0&timeout=10s" latency="1.394416ms" userAgent="kubelet/v1.21.0 (linux/amd64) kubernetes/c7e85d3" srcIP="172.18.0.3:48596" resp=200
I0223 10:20:55.523722       1 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/container-probe-6885/pods/test-webserver-f4ce3aee-1400-4a84-bf3b-7e5ceceebec9" latency="4.917065ms" userAgent="e2e.test/v1.21.0 (linux/amd64) kubernetes/c7e85d3 -- [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]" srcIP="[fc00:f853:ccd:e793::1]:46022" resp=200
I0223 10:20:55.528844       1 httplog.go:89] "HTTP" verb="PATCH" URI="/api/v1/namespaces/pods-9504/pods/pod-submit-remove-11249ab6-a80a-4b2f-ab03-ad36f5ff8136/status" latency="6.987405ms" userAgent="kubelet/v1.21.0 (linux/amd64) kubernetes/c7e85d3" srcIP="172.18.0.3:48596" resp=200
I0223 10:20:55.555406       1 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/dns-1862/pods/dns-test-79e81f49-fd54-4269-895c-d3ab3e53f095/proxy/results/wheezy_udp@dns-test-service" latency="3.438094ms" userAgent="e2e.test/v1.21.0 (linux/amd64) kubernetes/c7e85d3 -- [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]" srcIP="[fc00:f853:ccd:e793::1]:38478" resp=404
... skipping 32 lines ...
I0223 10:20:55.697076       1 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/security-context-9202/events" latency="4.05759ms" userAgent="kubelet/v1.21.0 (linux/amd64) kubernetes/c7e85d3" srcIP="172.18.0.3:48596" resp=201
I0223 10:20:55.731605       1 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/services-9882/serviceaccounts/default/token" latency="9.576151ms" userAgent="kubelet/v1.21.0 (linux/amd64) kubernetes/c7e85d3" srcIP="172.18.0.3:48596" resp=201
I0223 10:20:55.737482       1 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/security-context-9202/events" latency="5.076342ms" userAgent="kubelet/v1.21.0 (linux/amd64) kubernetes/c7e85d3" srcIP="172.18.0.3:48596" resp=201
I0223 10:20:55.737989       1 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/nodes/kind-worker" latency="4.563316ms" userAgent="e2e.test/v1.21.0 (linux/amd64) kubernetes/c7e85d3 -- [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently" srcIP="[fc00:f853:ccd:e793::1]:38470" resp=200
I0223 10:20:55.740360       1 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/nettest-8715/pods/netserver-0" latency="2.203033ms" userAgent="e2e.test/v1.21.0 (linux/amd64) kubernetes/c7e85d3 -- [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp" srcIP="[fc00:f853:ccd:e793::1]:38454" resp=200
I0223 10:20:55.750629       1 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/services-4733/pods/hairpin" latency="2.398158ms" userAgent="e2e.test/v1.21.0 (linux/amd64) kubernetes/c7e85d3 -- [sig-network] Services should allow pods to hairpin back to themselves through services" srcIP="[fc00:f853:ccd:e793::1]:45274" resp=200
I0223 10:20:55.819242       1 httplog.go:89] "HTTP" verb="PATCH" URI="/api/v1/namespaces/kubectl-4095/pods/httpd/status" latency="5.526482ms" userAgent="kubelet/v1.21.0 (linux/amd64) kubernetes/c7e85d3" srcIP="172.18.0.4:54138" resp=200
I0223 10:20:55.883809       1 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/security-context-9202/events" latency="3.396845ms" userAgent="kubelet/v1.21.0 (linux/amd64) kubernetes/c7e85d3" srcIP="172.18.0.3:48596" resp=201
I0223 10:20:55.919407       1 httplog.go:89] "HTTP" verb="GET" URI="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s" latency="3.025269ms" userAgent="kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/leader-election" srcIP="[fc00:f853:ccd:e793::2]:45914" resp=200
I0223 10:20:55.923382       1 httplog.go:89] "HTTP" verb="PUT" URI="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s" latency="2.852215ms" userAgent="kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/leader-election" srcIP="[fc00:f853:ccd:e793::2]:45914" resp=200
I0223 10:20:55.929248       1 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/containers-5686/serviceaccounts/default/token" latency="7.329315ms" userAgent="kubelet/v1.21.0 (linux/amd64) kubernetes/c7e85d3" srcIP="172.18.0.3:48596" resp=201
I0223 10:20:55.983376       1 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/services-8790/pods/execpod-affinityhbr7k" latency="13.389694ms" userAgent="e2e.test/v1.21.0 (linux/amd64) kubernetes/c7e85d3 -- [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]" srcIP="[fc00:f853:ccd:e793::1]:38456" resp=200
I0223 10:20:55.989405       1 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/nodes?fieldSelector=spec.unschedulable%3Dfalse" latency="4.624653ms" userAgent="e2e.test/v1.21.0 (linux/amd64) kubernetes/c7e85d3 -- [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]" srcIP="[fc00:f853:ccd:e793::1]:38456" resp=200
I0223 10:20:55.991359       1 httplog.go:89] "HTTP" verb="GET" URI="/apis/discovery.k8s.io/v1beta1?timeout=32s" latency="625.928µs" userAgent="e2e.test/v1.21.0 (linux/amd64) kubernetes/c7e85d3 -- [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]" srcIP="[fc00:f853:ccd:e793::1]:38456" resp=200
I0223 10:20:55.992181       1 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/services-8790/endpoints?fieldSelector=metadata.name%3Daffinity-nodeport-transition&limit=500&resourceVersion=0" latency="1.002802ms" userAgent="e2e.test/v1.21.0 (linux/amd64) kubernetes/c7e85d3 -- [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]" srcIP="[fc00:f853:ccd:e793::1]:38456" resp=200
I0223 10:20:55.993762       1 httplog.go:89] "HTTP" verb="GET" URI="/apis/discovery.k8s.io/v1beta1/namespaces/services-8790/endpointslices?labelSelector=kubernetes.io%2Fservice-name%3Daffinity-nodeport-transition&limit=500&resourceVersion=0" latency="827.918µs" userAgent="e2e.test/v1.21.0 (linux/amd64) kubernetes/c7e85d3 -- [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]" srcIP="[fc00:f853:ccd:e793::1]:38456" resp=200
I0223 10:20:55.993799       1 get.go:260] "Starting watch" path="/api/v1/namespaces/services-8790/endpoints" resourceVersion="12343" labels="" fields="metadata.name=affinity-nodeport-transition" timeout="8m6s"
I0223 10:20:55.995629       1 get.go:260] "Starting watch" path="/apis/discovery.k8s.io/v1beta1/namespaces/services-8790/endpointslices" resourceVersion="12298" labels="kubernetes.io/service-name=affinity-nodeport-transition" fields="" timeout="9m41s"
I0223 10:20:56.016446       1 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/nettest-4715/pods/netserver-1" latency="2.813826ms" userAgent="kubelet/v1.21.0 (linux/amd64) kubernetes/c7e85d3" srcIP="172.18.0.4:54138" resp=200
I0223 10:20:56.061227       1 shared_informer.go:270] caches populated
... skipping 22 lines ...
I0223 10:20:56.061972       1 httplog.go:89] "HTTP" verb="GET" URI="/readyz" latency="4.924724ms" userAgent="kube-probe/1.21+" srcIP="[fc00:f853:ccd:e793::2]:57612" resp=200
I0223 10:20:56.106394       1 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kubectl-540" latency="2.862354ms" userAgent="kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller" srcIP="[fc00:f853:ccd:e793::2]:46018" resp=200
I0223 10:20:56.106484       1 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/nettest-2938" latency="4.73112ms" userAgent="kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller" srcIP="[fc00:f853:ccd:e793::2]:46018" resp=200
I0223 10:20:56.108047       1 httplog.go:89] "HTTP" verb="GET" URI="/api?timeout=32s" latency="455.553µs" userAgent="kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller" srcIP="[fc00:f853:ccd:e793::2]:46018" resp=200
... skipping 22 lines ...
I0223 10:20:56.122118       1 httplog.go:89] "HTTP" verb="GET" URI="/apis/authentication.k8s.io/v1?timeout=32s" latency="9.10642ms" userAgent="kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller" srcIP="[fc00:f853:ccd:e793::2]:46018" resp=200
I0223 10:20:56.122363       1 httplog.go:89] "HTTP" verb="GET" 
URI=\"/apis/admissionregistration.k8s.io/v1?timeout=32s\" latency=\"3.569359ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.122559       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/admissionregistration.k8s.io/v1beta1?timeout=32s\" latency=\"239.261µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.122585       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/coordination.k8s.io/v1?timeout=32s\" latency=\"644.567µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.122943       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/apiregistration.k8s.io/v1beta1?timeout=32s\" latency=\"3.979507ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.123087       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/authorization.k8s.io/v1beta1?timeout=32s\" latency=\"4.19058ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.123527       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/coordination.k8s.io/v1beta1?timeout=32s\" latency=\"709.42µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.124227       1 httplog.go:89] \"HTTP\" verb=\"GET\" 
URI=\"/apis/certificates.k8s.io/v1beta1?timeout=32s\" latency=\"881.092µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.124401       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/extensions/v1beta1?timeout=32s\" latency=\"1.702509ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.124493       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/certificates.k8s.io/v1?timeout=32s\" latency=\"3.727439ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.124606       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/node.k8s.io/v1beta1?timeout=32s\" latency=\"260.854µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.124698       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/coordination.k8s.io/v1?timeout=32s\" latency=\"3.877957ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.124712       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/scheduling.k8s.io/v1beta1?timeout=32s\" latency=\"1.57252ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.124757       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/networking.k8s.io/v1beta1?timeout=32s\" 
latency=\"841.704µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.124765       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/networking.k8s.io/v1?timeout=32s\" latency=\"1.76938ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.124923       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/autoscaling/v1?timeout=32s\" latency=\"5.108218ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.125040       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/batch/v1beta1?timeout=32s\" latency=\"3.366791ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.125154       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s\" latency=\"266.716µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.125186       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/scheduling.k8s.io/v1?timeout=32s\" latency=\"430.76µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.125226       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/events.k8s.io/v1?timeout=32s\" latency=\"4.976146ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) 
kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.125420       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/admissionregistration.k8s.io/v1?timeout=32s\" latency=\"3.715214ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.125502       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/apiextensions.k8s.io/v1?timeout=32s\" latency=\"5.39345ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.125557       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/policy/v1beta1?timeout=32s\" latency=\"297.097µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.125566       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/apiextensions.k8s.io/v1?timeout=32s\" latency=\"3.480903ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.125584       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/apiextensions.k8s.io/v1beta1?timeout=32s\" latency=\"3.379877ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.125629       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/storage.k8s.io/v1beta1?timeout=32s\" latency=\"8.574604ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) 
kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.125809       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/autoscaling/v2beta2?timeout=32s\" latency=\"5.623743ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.126029       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/api/v1/namespaces/container-probe-8352/pods/startup-066b3729-e801-4590-b7a9-d90f3892dcd9\" latency=\"4.209696ms\" userAgent=\"kubelet/v1.21.0 (linux/amd64) kubernetes/c7e85d3\" srcIP=\"172.18.0.3:48596\" resp=200\nI0223 10:20:56.126058       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/node.k8s.io/v1?timeout=32s\" latency=\"2.306283ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.125809       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/certificates.k8s.io/v1?timeout=32s\" latency=\"3.983926ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.125935       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/rbac.authorization.k8s.io/v1?timeout=32s\" latency=\"2.393658ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.125943       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/discovery.k8s.io/v1beta1?timeout=32s\" latency=\"3.944566ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" 
srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.126293       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/apps/v1?timeout=32s\" latency=\"6.359571ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.126530       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/scheduling.k8s.io/v1?timeout=32s\" latency=\"6.089535ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.126891       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/batch/v1beta1?timeout=32s\" latency=\"6.374452ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.127122       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/scheduling.k8s.io/v1beta1?timeout=32s\" latency=\"6.421717ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.126912       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/authentication.k8s.io/v1?timeout=32s\" latency=\"6.315431ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.127213       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/authentication.k8s.io/v1beta1?timeout=32s\" latency=\"6.291097ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.127417     
  1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/certificates.k8s.io/v1beta1?timeout=32s\" latency=\"6.53573ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.127574       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/coordination.k8s.io/v1beta1?timeout=32s\" latency=\"6.559296ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.127843       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/authorization.k8s.io/v1?timeout=32s\" latency=\"6.792366ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.127981       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/networking.k8s.io/v1?timeout=32s\" latency=\"6.856342ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.128507       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/events.k8s.io/v1beta1?timeout=32s\" latency=\"7.891676ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.128578       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/node.k8s.io/v1?timeout=32s\" latency=\"7.357368ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.128701       1 httplog.go:89] \"HTTP\" verb=\"GET\" 
URI=\"/apis/batch/v1?timeout=32s\" latency=\"7.070934ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.128776       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/storage.k8s.io/v1beta1?timeout=32s\" latency=\"7.276285ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.128787       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/autoscaling/v2beta2?timeout=32s\" latency=\"7.26638ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.128805       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/autoscaling/v2beta1?timeout=32s\" latency=\"7.438697ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.129001       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/storage.k8s.io/v1?timeout=32s\" latency=\"7.580213ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.129013       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/node.k8s.io/v1beta1?timeout=32s\" latency=\"7.688522ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.129030       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/autoscaling/v2beta1?timeout=32s\" latency=\"9.055747ms\" 
userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.129045       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/autoscaling/v1?timeout=32s\" latency=\"7.776065ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.129069       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/authorization.k8s.io/v1beta1?timeout=32s\" latency=\"7.890274ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.129246       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/discovery.k8s.io/v1beta1?timeout=32s\" latency=\"3.391958ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.129250       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/batch/v1?timeout=32s\" latency=\"8.846639ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.130501       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/apiextensions.k8s.io/v1beta1?timeout=32s\" latency=\"10.193008ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.135674       1 httplog.go:89] \"HTTP\" verb=\"DELETE\" URI=\"/api/v1/namespaces/kubectl-540/serviceaccounts\" latency=\"2.964888ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) 
kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.138417       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/api/v1/namespaces/kubectl-540/serviceaccounts\" latency=\"1.890853ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.140070       1 httplog.go:89] \"HTTP\" verb=\"DELETE\" URI=\"/api/v1/namespaces/nettest-2938/serviceaccounts\" latency=\"8.397991ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.146410       1 httplog.go:89] \"HTTP\" verb=\"DELETE\" URI=\"/api/v1/namespaces/kubectl-540/limitranges\" latency=\"2.077845ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.146488       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/api/v1/namespaces/nettest-2938/serviceaccounts\" latency=\"1.961568ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.148686       1 httplog.go:89] \"HTTP\" verb=\"DELETE\" URI=\"/api/v1/namespaces/nettest-2938/secrets/default-token-w5652\" latency=\"4.610267ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/tokens-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:45914\" resp=200\nI0223 10:20:56.151961       1 httplog.go:89] \"HTTP\" verb=\"DELETE\" URI=\"/api/v1/namespaces/nettest-2938/resourcequotas\" latency=\"3.850973ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) 
kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.152026       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/api/v1/namespaces/kubectl-540/limitranges\" latency=\"4.535203ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.155105       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/api/v1/namespaces/nettest-2938/resourcequotas\" latency=\"2.277401ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.155998       1 httplog.go:89] \"HTTP\" verb=\"DELETE\" URI=\"/apis/batch/v1/namespaces/kubectl-540/jobs\" latency=\"3.11667ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.158997       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/batch/v1/namespaces/kubectl-540/jobs\" latency=\"1.625009ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.159434       1 httplog.go:89] \"HTTP\" verb=\"DELETE\" URI=\"/apis/apps/v1/namespaces/nettest-2938/daemonsets\" latency=\"2.437636ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.161946       1 httplog.go:89] \"HTTP\" verb=\"DELETE\" URI=\"/apis/batch/v1beta1/namespaces/kubectl-540/cronjobs\" latency=\"1.621808ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) 
kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.162009       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/apps/v1/namespaces/nettest-2938/daemonsets\" latency=\"1.830176ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.164708       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/batch/v1beta1/namespaces/kubectl-540/cronjobs\" latency=\"1.874065ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.166145       1 httplog.go:89] \"HTTP\" verb=\"DELETE\" URI=\"/apis/networking.k8s.io/v1/namespaces/nettest-2938/networkpolicies\" latency=\"3.209914ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.167167       1 httplog.go:89] \"HTTP\" verb=\"DELETE\" URI=\"/apis/policy/v1beta1/namespaces/kubectl-540/poddisruptionbudgets\" latency=\"1.661374ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.168592       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/networking.k8s.io/v1/namespaces/nettest-2938/networkpolicies\" latency=\"1.807209ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.169531       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/policy/v1beta1/namespaces/kubectl-540/poddisruptionbudgets\" 
latency=\"1.60694ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.170784       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/api/v1/namespaces/tables-2885\" latency=\"1.816973ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.171539       1 httplog.go:89] \"HTTP\" verb=\"DELETE\" URI=\"/apis/coordination.k8s.io/v1/namespaces/nettest-2938/leases\" latency=\"2.232157ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.172221       1 httplog.go:89] \"HTTP\" verb=\"DELETE\" URI=\"/api/v1/namespaces/kubectl-540/resourcequotas\" latency=\"1.824103ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.173046       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/api?timeout=32s\" latency=\"529.282µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.174006       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis?timeout=32s\" latency=\"383.438µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.174146       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/coordination.k8s.io/v1/namespaces/nettest-2938/leases\" latency=\"1.69728ms\" userAgent=\"kube-controller-manager/v1.21.0 
(linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.174873       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/api/v1/namespaces/kubectl-540/resourcequotas\" latency=\"1.472701ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.175383       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/flowcontrol.apiserver.k8s.io/v1beta1?timeout=32s\" latency=\"581.504µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.175481       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/apps/v1?timeout=32s\" latency=\"350.3µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.175542       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/events.k8s.io/v1?timeout=32s\" latency=\"376.417µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.175676       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/events.k8s.io/v1beta1?timeout=32s\" latency=\"324.788µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.175999       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/api/v1?timeout=32s\" latency=\"1.19057ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" 
srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.176220       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/authentication.k8s.io/v1beta1?timeout=32s\" latency=\"302.352µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.176301       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/authentication.k8s.io/v1?timeout=32s\" latency=\"645.505µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.176372       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/authorization.k8s.io/v1?timeout=32s\" latency=\"503.081µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.176472       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/autoscaling/v1?timeout=32s\" latency=\"247.272µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.176449       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/authorization.k8s.io/v1beta1?timeout=32s\" latency=\"419.821µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.176651       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/apiregistration.k8s.io/v1beta1?timeout=32s\" latency=\"1.637208ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" 
resp=200\nI0223 10:20:56.176766       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/autoscaling/v2beta1?timeout=32s\" latency=\"320.667µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.176880       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/apiregistration.k8s.io/v1?timeout=32s\" latency=\"1.961634ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.176959       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/batch/v1?timeout=32s\" latency=\"274.605µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.177057       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/batch/v1beta1?timeout=32s\" latency=\"267.198µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.177291       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/certificates.k8s.io/v1beta1?timeout=32s\" latency=\"299.722µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.177454       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/networking.k8s.io/v1beta1?timeout=32s\" latency=\"234.409µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.177519       1 httplog.go:89] \"HTTP\" verb=\"GET\" 
URI=\"/apis/certificates.k8s.io/v1?timeout=32s\" latency=\"630.584µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.177590       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/policy/v1beta1?timeout=32s\" latency=\"241.441µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.177663       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/autoscaling/v2beta2?timeout=32s\" latency=\"1.05345ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.177716       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/extensions/v1beta1?timeout=32s\" latency=\"397.735µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.177726       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/rbac.authorization.k8s.io/v1?timeout=32s\" latency=\"290.809µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.177884       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s\" latency=\"337.509µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.178184       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/storage.k8s.io/v1?timeout=32s\" latency=\"416.205µs\" 
userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.178297       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/admissionregistration.k8s.io/v1beta1?timeout=32s\" latency=\"285.444µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.178463       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/storage.k8s.io/v1beta1?timeout=32s\" latency=\"637.326µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.178499       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/apiextensions.k8s.io/v1?timeout=32s\" latency=\"283.417µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.178507       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/networking.k8s.io/v1?timeout=32s\" latency=\"1.437421ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.178536       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/apiextensions.k8s.io/v1beta1?timeout=32s\" latency=\"211.738µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.178806       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/scheduling.k8s.io/v1?timeout=32s\" latency=\"236.44µs\" userAgent=\"kube-controller-manager/v1.21.0 
(linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.179045       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/coordination.k8s.io/v1?timeout=32s\" latency=\"344.755µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.179192       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/scheduling.k8s.io/v1beta1?timeout=32s\" latency=\"505.409µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.179394       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/coordination.k8s.io/v1beta1?timeout=32s\" latency=\"561.854µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.179588       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/admissionregistration.k8s.io/v1?timeout=32s\" latency=\"246.97µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.179751       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/node.k8s.io/v1?timeout=32s\" latency=\"844.4µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.179918       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/node.k8s.io/v1beta1?timeout=32s\" latency=\"977.316µs\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) 
kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.180059       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/discovery.k8s.io/v1beta1?timeout=32s\" latency=\"1.021807ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.180835       1 httplog.go:89] \"HTTP\" verb=\"DELETE\" URI=\"/api/v1/namespaces/nettest-2938/persistentvolumeclaims\" latency=\"1.648843ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.181979       1 httplog.go:89] \"HTTP\" verb=\"DELETE\" URI=\"/api/v1/namespaces/kubectl-540/secrets\" latency=\"2.727218ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.183490       1 httplog.go:89] \"HTTP\" verb=\"DELETE\" URI=\"/api/v1/namespaces/tables-2885/resourcequotas\" latency=\"1.610187ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.183517       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/api/v1/namespaces/nettest-2938/persistentvolumeclaims\" latency=\"1.296331ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.183787       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/api/v1/namespaces/kubectl-540/secrets\" latency=\"1.242322ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) 
kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.185580       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/api/v1/namespaces/tables-2885/resourcequotas\" latency=\"1.332994ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.186657       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/api/v1/namespaces/kubectl-540/pods\" latency=\"1.925318ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.187967       1 httplog.go:89] \"HTTP\" verb=\"DELETE\" URI=\"/api/v1/namespaces/tables-2885/limitranges\" latency=\"1.812219ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nE0223 10:20:56.188543       1 status.go:71] apiserver received an error that is not an metav1.Status: &fmt.wrapError{msg:\"error trying to reach service: dial tcp [fc00:f853:ccd:e793::2]:10252: connect: connection refused\", err:(*net.OpError)(0xc007486d20)}: error trying to reach service: dial tcp [fc00:f853:ccd:e793::2]:10252: connect: connection refused\nI0223 10:20:56.189301       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/api/v1/namespaces/kube-system/pods/kube-controller-manager-kind-control-plane:10252/proxy/metrics\" latency=\"2.090152ms\" userAgent=\"e2e.test/v1.21.0 (linux/amd64) kubernetes/c7e85d3 -- [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]\" srcIP=\"[fc00:f853:ccd:e793::1]:38462\" resp=500 statusStack=\"\\ngoroutine 379095 
[running]:\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).recordStatus(0xc006c968c0, 0x1f4)\\n\\tstaging/src/k8s.io/apiserver/pkg/server/httplog/httplog.go:237 +0xcf\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.(*respLogger).WriteHeader(0xc006c968c0, 0x1f4)\\n\\tstaging/src/k8s.io/apiserver/pkg/server/httplog/httplog.go:216 +0x35\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.(*ResponseWriterDelegator).WriteHeader(0xc010b2bc50, 0x1f4)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:572 +0x45\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.(*deferredResponseWriter).Write(0xc00d2891a0, 0xc0006e90e0, 0x92, 0x99, 0x0, 0x0, 0x0)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:229 +0x2fd\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/protobuf.(*Serializer).doEncode(0xc0002e8200, 0x55afaa0, 0xc0051af040, 0x559ef00, 0xc00d2891a0, 0x0, 0x4c6e25b)\\n\\tstaging/src/k8s.io/apimachinery/pkg/runtime/serializer/protobuf/protobuf.go:212 +0x5e5\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/protobuf.(*Serializer).Encode(0xc0002e8200, 0x55afaa0, 0xc0051af040, 0x559ef00, 0xc00d2891a0, 0x3d9f381, 0x6)\\n\\tstaging/src/k8s.io/apimachinery/pkg/runtime/serializer/protobuf/protobuf.go:169 +0x147\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).doEncode(0xc0051af0e0, 0x55afaa0, 0xc0051af040, 0x559ef00, 0xc00d2891a0, 0x0, 0x0)\\n\\tstaging/src/k8s.io/apimachinery/pkg/runtime/serializer/versioning/versioning.go:228 +0x396\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).Encode(0xc0051af0e0, 0x55afaa0, 0xc0051af040, 0x559ef00, 0xc00d2891a0, 0xc000410820, 0xc000001b00)\\n\\tstaging/src/k8s.io/apimachinery/pkg/runtime/serializer/versioning/versioning.go:184 
+0x170\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.SerializeObject(0x4ce795a, 0x23, 0x7fb2b02228c0, 0xc0051af0e0, 0x5604fe0, 0xc016f70bd0, 0xc011258f00, 0x1f4, 0x55afaa0, 0xc0051af040)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:107 +0x45a\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.WriteObjectNegotiated(0x5608a60, 0xc0006490c0, 0x5608da0, 0x77ec770, 0x0, 0x0, 0x4c6e25b, 0x2, 0x5604fe0, 0xc016f70bd0, ...)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:278 +0x5cd\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.ErrorNegotiated(0x5598aa0, 0xc0101818e0, 0x5608a60, 0xc0006490c0, 0x0, 0x0, 0x4c6e25b, 0x2, 0x5604fe0, 0xc016f70bd0, ...)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:297 +0x16f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.(*RequestScope).err(...)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/handlers/rest.go:106\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.(*responder).Error(0xc010672800, 0x5598aa0, 0xc0101818e0)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/handlers/rest.go:225 +0xd6\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/proxy.simpleResponder.Error(...)\\n\\tstaging/src/k8s.io/apimachinery/pkg/util/proxy/upgradeaware.go:108\\nnet/http/httputil.(*ReverseProxy).ServeHTTP(0xc007486c30, 0x5604fe0, 0xc016f70bd0, 0xc01346ac90)\\n\\tGOROOT/src/net/http/httputil/reverseproxy.go:290 +0xfd5\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/proxy.(*UpgradeAwareHandler).ServeHTTP(0xc00d288f00, 0x5604fe0, 0xc016f70bd0, 0xc011258f00)\\n\\tstaging/src/k8s.io/apimachinery/pkg/util/proxy/upgradeaware.go:241 +0x5fb\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.ConnectResource.func1.1()\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/handlers/rest.go:208 
+0x297\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.RecordLongRunning(0xc011258f00, 0xc00b750c60, 0x4c78050, 0x9, 0xc00c317440)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:393 +0x3bc\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers.ConnectResource.func1(0x5604fe0, 0xc016f70bd0, 0xc011258f00)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/handlers/rest.go:202 +0x472\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints.restfulConnectResource.func1(0xc010b2bb30, 0xc006c96930)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/installer.go:1230 +0x99\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.InstrumentRouteFunc.func1(0xc010b2bb30, 0xc006c96930)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:450 +0x2d5\\nk8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).dispatch(0xc000dfa1b0, 0x5605160, 0xc006c968c0, 0xc011258f00)\\n\\tvendor/github.com/emicklei/go-restful/container.go:288 +0xa84\\nk8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).Dispatch(...)\\n\\tvendor/github.com/emicklei/go-restful/container.go:199\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4c854f8, 0xe, 0xc000dfa1b0, 0xc000c501c0, 0x5605160, 0xc006c968c0, 0xc011258f00)\\n\\tstaging/src/k8s.io/apiserver/pkg/server/handler.go:146 +0x5de\\nk8s.io/kubernetes/vendor/k8s.io/kube-aggregator/pkg/apiserver.(*proxyHandler).ServeHTTP(0xc005eb4dc0, 0x5605160, 0xc006c968c0, 0xc011258f00)\\n\\tstaging/src/k8s.io/kube-aggregator/pkg/apiserver/handler_proxy.go:121 +0x183\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*pathHandler).ServeHTTP(0xc007614200, 0x5605160, 0xc006c968c0, 0xc011258f00)\\n\\tstaging/src/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:248 +0x47a\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/mux.(*PathRecorderMux).ServeHTTP(0xc007e7b180, 0x5605160, 0xc006c968c0, 
0xc011258f00)\\n\\tstaging/src/k8s.io/apiserver/pkg/server/mux/pathrecorder.go:234 +0x8c\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP(0x4c88a4c, 0xf, 0xc004a7b200, 0xc007e7b180, 0x5605160, 0xc006c968c0, 0xc011258f00)\\n\\tstaging/src/k8s.io/apiserver/pkg/server/handler.go:154 +0x87f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1(0x5605160, 0xc006c968c0, 0xc011258f00)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:95 +0x193\\nnet/http.HandlerFunc.ServeHTTP(0xc007eafa70, 0x5605160, 0xc006c968c0, 0xc011258f00)\\n\\tGOROOT/src/net/http/server.go:2042 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1(0x5605160, 0xc006c968c0, 0xc011258f00)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:64 +0x5ba\\nnet/http.HandlerFunc.ServeHTTP(0xc007ea0bc0, 0x5605160, 0xc006c968c0, 0xc011258f00)\\n\\tGOROOT/src/net/http/server.go:2042 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1(0x5605160, 0xc006c968c0, 0xc011258f00)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:71 +0x186\\nnet/http.HandlerFunc.ServeHTTP(0xc007ea0c00, 0x5605160, 0xc006c968c0, 0xc011258f00)\\n\\tGOROOT/src/net/http/server.go:2042 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1(0x5605160, 0xc006c968c0, 0xc011258f00)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:95 +0x193\\nnet/http.HandlerFunc.ServeHTTP(0xc007eafaa0, 0x5605160, 0xc006c968c0, 0xc011258f00)\\n\\tGOROOT/src/net/http/server.go:2042 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithPriorityAndFairness.func1(0x5605160, 0xc006c968c0, 0xc011258f00)\\n\\tstaging/src/k8s.io/apiserver/pkg/server/filters/priority-and-fairness.go:90 +0x262\\nnet/http.HandlerFunc.ServeHTTP(0xc007eafad0, 0x5605160, 
0xc006c968c0, 0xc011258f00)\\n\\tGOROOT/src/net/http/server.go:2042 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1(0x5605160, 0xc006c968c0, 0xc011258f00)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:71 +0x186\\nnet/http.HandlerFunc.ServeHTTP(0xc007ea0c40, 0x5605160, 0xc006c968c0, 0xc011258f00)\\n\\tGOROOT/src/net/http/server.go:2042 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1(0x5605160, 0xc006c968c0, 0xc011258f00)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:95 +0x193\\nnet/http.HandlerFunc.ServeHTTP(0xc007eafb00, 0x5605160, 0xc006c968c0, 0xc011258f00)\\n\\tGOROOT/src/net/http/server.go:2042 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1(0x5605160, 0xc006c968c0, 0xc011258f00)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:50 +0x243d\\nnet/http.HandlerFunc.ServeHTTP(0xc007ea0c80, 0x5605160, 0xc006c968c0, 0xc011258f00)\\n\\tGOROOT/src/net/http/server.go:2042 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1(0x5605160, 0xc006c968c0, 0xc011258f00)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:71 +0x186\\nnet/http.HandlerFunc.ServeHTTP(0xc007ea0cc0, 0x5605160, 0xc006c968c0, 0xc011258f00)\\n\\tGOROOT/src/net/http/server.go:2042 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1(0x5605160, 0xc006c968c0, 0xc011258f00)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:95 +0x193\\nnet/http.HandlerFunc.ServeHTTP(0xc007eafb30, 0x5605160, 0xc006c968c0, 0xc011258f00)\\n\\tGOROOT/src/net/http/server.go:2042 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1(0x5605160, 0xc006c968c0, 
0xc011258f00)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:71 +0x186\\nnet/http.HandlerFunc.ServeHTTP(0xc007ea0d00, 0x5605160, 0xc006c968c0, 0xc011258f00)\\n\\tGOROOT/src/net/http/server.go:2042 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1(0x5605160, 0xc006c968c0, 0xc011258f00)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:95 +0x193\\nnet/http.HandlerFunc.ServeHTTP(0xc007eafb90, 0x5605160, 0xc006c968c0, 0xc011258f00)\\n\\tGOROOT/src/net/http/server.go:2042 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthentication.func1(0x5605160, 0xc006c968c0, 0xc011258d00)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:70 +0x6d4\\nnet/http.HandlerFunc.ServeHTTP(0xc007e80910, 0x5605160, 0xc006c968c0, 0xc011258d00)\\n\\tGOROOT/src/net/http/server.go:2042 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1(0x5605160, 0xc006c968c0, 0xc011258c00)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:80 +0x38a\\nnet/http.HandlerFunc.ServeHTTP(0xc007ea0d40, 0x5605160, 0xc006c968c0, 0xc011258c00)\\n\\tGOROOT/src/net/http/server.go:2042 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP(0xc007ea57a0, 0x5605160, 0xc006c968c0, 0xc011258c00)\\n\\tstaging/src/k8s.io/apiserver/pkg/server/filters/timeout.go:84 +0x476\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.withRequestDeadline.func1(0x5605160, 0xc006c968c0, 0xc011258c00)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/filters/request_deadline.go:66 +0x946\\nnet/http.HandlerFunc.ServeHTTP(0xc007e7b1f0, 0x5605160, 0xc006c968c0, 0xc011258c00)\\n\\tGOROOT/src/net/http/server.go:2042 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithWaitGroup.func1(0x5605160, 0xc006c968c0, 
0xc011258c00)\\n\\tstaging/src/k8s.io/apiserver/pkg/server/filters/waitgroup.go:59 +0x137\\nnet/http.HandlerFunc.ServeHTTP(0xc007eafbc0, 0x5605160, 0xc006c968c0, 0xc011258c00)\\n\\tGOROOT/src/net/http/server.go:2042 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithRequestInfo.func1(0x5605160, 0xc006c968c0, 0xc011258b00)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/filters/requestinfo.go:39 +0x269\\nnet/http.HandlerFunc.ServeHTTP(0xc007eafbf0, 0x5605160, 0xc006c968c0, 0xc011258b00)\\n\\tGOROOT/src/net/http/server.go:2042 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithWarningRecorder.func1(0x5605160, 0xc006c968c0, 0xc011258a00)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/filters/warning.go:35 +0x1a7\\nnet/http.HandlerFunc.ServeHTTP(0xc007ea57c0, 0x5605160, 0xc006c968c0, 0xc011258a00)\\n\\tGOROOT/src/net/http/server.go:2042 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithCacheControl.func1(0x5605160, 0xc006c968c0, 0xc011258a00)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/filters/cachecontrol.go:31 +0xa8\\nnet/http.HandlerFunc.ServeHTTP(0xc007ea57e0, 0x5605160, 0xc006c968c0, 0xc011258a00)\\n\\tGOROOT/src/net/http/server.go:2042 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.withRequestReceivedTimestampWithClock.func1(0x5605160, 0xc006c968c0, 0xc011258900)\\n\\tstaging/src/k8s.io/apiserver/pkg/endpoints/filters/request_received_time.go:38 +0x1a7\\nnet/http.HandlerFunc.ServeHTTP(0xc007eafc20, 0x5605160, 0xc006c968c0, 0xc011258900)\\n\\tGOROOT/src/net/http/server.go:2042 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.WithLogging.func1(0x55f7060, 0xc016f70bc0, 0xc011258800)\\n\\tstaging/src/k8s.io/apiserver/pkg/server/httplog/httplog.go:91 +0x322\\nnet/http.HandlerFunc.ServeHTTP(0xc007ea5820, 0x55f7060, 0xc016f70bc0, 0xc011258800)\\n\\tGOROOT/src/net/http/server.go:2042 
+0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.withPanicRecovery.func1(0x55f7060, 0xc016f70bc0, 0xc011258800)\\n\\tstaging/src/k8s.io/apiserver/pkg/server/filters/wrap.go:70 +0xe6\\nnet/http.HandlerFunc.ServeHTTP(0xc007ea5840, 0x55f7060, 0xc016f70bc0, 0xc011258800)\\n\\tGOROOT/src/net/http/server.go:2042 +0x44\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*APIServerHandler).ServeHTTP(0xc007eafc50, 0x55f7060, 0xc016f70bc0, 0xc011258800)\\n\\tstaging/src/k8s.io/apiserver/pkg/server/handler.go:189 +0x51\\nnet/http.serverHandler.ServeHTTP(0xc004ee3ea0, 0x55f7060, 0xc016f70bc0, 0xc011258800)\\n\\tGOROOT/src/net/http/server.go:2843 +0xa3\\nnet/http.initALPNRequest.ServeHTTP(0x560b3a0, 0xc008d0e720, 0xc0056bae00, 0xc004ee3ea0, 0x55f7060, 0xc016f70bc0, 0xc011258800)\\n\\tGOROOT/src/net/http/server.go:3415 +0x8d\\nk8s.io/kubernetes/vendor/golang.org/x/net/http2.(*serverConn).runHandler(0xc010c25c80, 0xc016f70bc0, 0xc011258800, 0xc0106725a0)\\n\\tvendor/golang.org/x/net/http2/server.go:2152 +0x8b\\ncreated by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*serverConn).processHeaders\\n\\tvendor/golang.org/x/net/http2/server.go:1882 +0x505\\n\" addedInfo=\"\\nlogging error output: \\\"k8s\\\\x00\\\\n\\\\f\\\\n\\\\x02v1\\\\x12\\\\x06Status\\\\x12z\\\\n\\\\x06\\\\n\\\\x00\\\\x12\\\\x00\\\\x1a\\\\x00\\\\x12\\\\aFailure\\\\x1aberror trying to reach service: dial tcp [fc00:f853:ccd:e793::2]:10252: connect: connection refused\\\\\\\"\\\\x000\\\\xf4\\\\x03\\\\x1a\\\\x00\\\\\\\"\\\\x00\\\"\\n\"\nI0223 10:20:56.192233       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/api/v1/namespaces/tables-2885/limitranges\" latency=\"3.474484ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.192350       1 httplog.go:89] \"HTTP\" verb=\"DELETE\" URI=\"/api/v1/namespaces/kubectl-540/pods\" latency=\"4.715895ms\" 
userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.194611       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/api/v1/namespaces/kubectl-540/pods\" latency=\"1.621206ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.197434       1 httplog.go:89] \"HTTP\" verb=\"DELETE\" URI=\"/apis/apps/v1/namespaces/kubectl-540/controllerrevisions\" latency=\"1.646809ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.197973       1 httplog.go:89] \"HTTP\" verb=\"DELETE\" URI=\"/api/v1/namespaces/tables-2885/serviceaccounts\" latency=\"4.774305ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.202492       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/apis/apps/v1/namespaces/kubectl-540/controllerrevisions\" latency=\"4.384241ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.203773       1 httplog.go:89] \"HTTP\" verb=\"GET\" URI=\"/api/v1/namespaces/tables-2885/serviceaccounts\" latency=\"4.493007ms\" userAgent=\"kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller\" srcIP=\"[fc00:f853:ccd:e793::2]:46018\" resp=200\nI0223 10:20:56.204502       1 httplog.go:89] \"HTTP\" verb=\"GET\" 
URI="/api/v1/namespaces/containers-5686/pods/client-containers-096e310e-99ca-499e-ba15-6637e24c422d" latency="1.901407ms" userAgent="e2e.test/v1.21.0 (linux/amd64) kubernetes/c7e85d3 -- [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]" srcIP="[fc00:f853:ccd:e793::1]:38464" resp=200
I0223 10:20:56.205288       1 httplog.go:89] "HTTP" verb="DELETE" URI="/apis/events.k8s.io/v1/namespaces/kubectl-540/events" latency="1.965886ms" userAgent="kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller" srcIP="[fc00:f853:ccd:e793::2]:46018" resp=200
I0223 10:20:56.205406       1 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/resourcequota-1218/secrets" latency="2.495538ms" userAgent="e2e.test/v1.21.0 (linux/amd64) kubernetes/c7e85d3 -- [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]" srcIP="[fc00:f853:ccd:e793::1]:38418" resp=200
I0223 10:20:56.205789       1 httplog.go:89] "HTTP" verb="DELETE" URI="/api/v1/namespaces/tables-2885/secrets/default-token-dptt8" latency="7.019645ms" userAgent="kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/tokens-controller" srcIP="[fc00:f853:ccd:e793::2]:45914" resp=200
I0223 10:20:56.206595       1 httplog.go:89] "HTTP" verb="DELETE" URI="/apis/apps/v1/namespaces/tables-2885/statefulsets" latency="2.177428ms" userAgent="kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller" srcIP="[fc00:f853:ccd:e793::2]:46018" resp=200
I0223 10:20:56.208732       1 httplog.go:89] "HTTP" verb="GET" URI="/apis/events.k8s.io/v1/namespaces/kubectl-540/events" latency="2.4406ms" userAgent="kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller" srcIP="[fc00:f853:ccd:e793::2]:46018" resp=200
I0223 10:20:56.209259       1 httplog.go:89] "HTTP" verb="GET" URI="/apis/apps/v1/namespaces/tables-2885/statefulsets" latency="1.616634ms" userAgent="kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller" srcIP="[fc00:f853:ccd:e793::2]:46018" resp=200
I0223 10:20:56.211129       1 httplog.go:89] "HTTP" verb="DELETE" URI="/apis/autoscaling/v1/namespaces/kubectl-540/horizontalpodautoscalers" latency="1.588337ms" userAgent="kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller" srcIP="[fc00:f853:ccd:e793::2]:46018" resp=200
I0223 10:20:56.211751       1 httplog.go:89] "HTTP" verb="DELETE" URI="/apis/networking.k8s.io/v1/namespaces/tables-2885/networkpolicies" latency="1.433627ms" userAgent="kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller" srcIP="[fc00:f853:ccd:e793::2]:46018" resp=200
I0223 10:20:56.213983       1 httplog.go:89] "HTTP" verb="GET" URI="/apis/autoscaling/v1/namespaces/kubectl-540/horizontalpodautoscalers" latency="2.000731ms" userAgent="kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller" srcIP="[fc00:f853:ccd:e793::2]:46018" resp=200
I0223 10:20:56.214043       1 httplog.go:89] "HTTP" verb="GET" URI="/apis/networking.k8s.io/v1/namespaces/tables-2885/networkpolicies" latency="1.239648ms" userAgent="kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller" srcIP="[fc00:f853:ccd:e793::2]:46018" resp=200
I0223 10:20:56.218463       1 httplog.go:89] "HTTP" verb="DELETE" URI="/apis/networking.k8s.io/v1/namespaces/tables-2885/ingresses" latency="3.713006ms" userAgent="kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller" srcIP="[fc00:f853:ccd:e793::2]:46018" resp=200
I0223 10:20:56.219659       1 httplog.go:89] "HTTP" verb="DELETE" URI="/api/v1/namespaces/kubectl-540/events" latency="3.826824ms" userAgent="kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller" srcIP="[fc00:f853:ccd:e793::2]:46018" resp=200
I0223 10:20:56.222430       1 httplog.go:89] "HTTP" verb="GET" URI="/apis/crd-publish-openapi-test-foo.example.com/v1/e2e-test-crd-publish-openapi-3851-crds?resourceVersion=9619" latency="629.826µs" userAgent="kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/metadata-informers" srcIP="[fc00:f853:ccd:e793::2]:45914" resp=404
I0223 10:20:56.225568       1 httplog.go:89] "HTTP" verb="POST" URI="/api/v1/namespaces/services-9882/events" latency="9.582048ms" userAgent="kubelet/v1.21.0 (linux/amd64) kubernetes/c7e85d3" srcIP="172.18.0.4:54138" resp=201
I0223 10:20:56.225835       1 httplog.go:89] "HTTP" verb="PATCH" URI="/api/v1/namespaces/nettest-4715/pods/netserver-1/status" latency="11.963894ms" userAgent="kubelet/v1.21.0 (linux/amd64) kubernetes/c7e85d3" srcIP="172.18.0.4:54138" resp=200
I0223 10:20:56.226210       1 httplog.go:89] "HTTP" verb="GET" URI="/api/v1/namespaces/kubectl-540/events" latency="5.601253ms" userAgent="kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller" srcIP="[fc00:f853:ccd:e793::2]:46018" resp=200
I0223 10:20:56.226275       1 httplog.go:89] "HTTP" verb="GET" URI="/apis/networking.k8s.io/v1/namespaces/tables-2885/ingresses" latency="7.146426ms" userAgent="kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller" srcIP="[fc00:f853:ccd:e793::2]:46018" resp=200
I0223 10:20:56.228705       1 httplog.go:89] "HTTP" verb="DELETE" URI="/apis/networking.k8s.io/v1/namespaces/kubectl-540/networkpolicies" latency="1.568467ms" userAgent="kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller" srcIP="[fc00:f853:ccd:e793::2]:46018" resp=200
I0223 10:20:56.232454       1 httplog.go:89] "HTTP" verb="DELETE" URI="/apis/policy/v1beta1/namespaces/tables-2885/poddisruptionbudgets" latency="5.43909ms" userAgent="kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller" srcIP="[fc00:f853:ccd:e793::2]:46018" resp=200
I0223 10:20:56.233654       1 httplog.go:89] "HTTP" verb="GET" URI="/apis/networking.k8s.io/v1/namespaces/kubectl-540/networkpolicies" latency="3.110053ms" userAgent="kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller" srcIP="[fc00:f853:ccd:e793::2]:46018" resp=200
I0223 10:20:56.234980       1 httplog.go:89] "HTTP" verb="GET" URI="/apis/policy/v1beta1/namespaces/tables-2885/poddisruptionbudgets" latency="1.758407ms" userAgent="kube-controller-manager/v1.21.0 (linux/amd64) kubernetes/c7e85d3/system:serviceaccount:kube-system:namespace-controller" srcIP="[fc00:f853:ccd:e793::2]:46018" resp=200
I0223 10:20:56.237115       1 httplog.go:89] "HTTP" verb="DELETE" URI="/apis/discovery