Result: FAILURE
Tests: 0 failed / 1920 succeeded
Started: 2020-10-26 17:12
Elapsed: 9m39s
Revision: master

No Test Failures!



Error lines from build-log.txt

+ bazel test --config=unit --config=remote --remote_instance_name=projects/k8s-prow-builds/instances/default_instance //... //hack:verify-all -- -//build/... -//vendor/...
$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
INFO: Invocation ID: 6b631d59-e7b4-41a9-aec7-c5a4aaced3e4
WARNING: Download from https://storage.googleapis.com/k8s-bazel-cache/https://github.com/bazelbuild/rules_docker/releases/download/v0.14.4/rules_docker-v0.14.4.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
Loading:  (1 packages loaded)
Loading: 1 packages loaded
Loading: 150 packages loaded
    currently loading: staging/src/k8s.io/sample-apiserver/pkg/generated/openapi ... (4 packages)
Loading: 665 packages loaded
    currently loading: staging/src/k8s.io/client-go/kubernetes/typed/storage/v1alpha1 ... (3 packages)
... skipping 3 lines ...
    currently loading: vendor/k8s.io/client-go/kubernetes/typed/autoscaling/v2beta1 ... (4 packages)
Analyzing: 962 targets (4557 packages loaded, 0 targets configured)
DEBUG: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/bazel_toolchains/rules/rbe_repo.bzl:491:5: Bazel 2.2.0 is used in rbe_default.
Analyzing: 962 targets (4577 packages loaded, 18581 targets configured)
Analyzing: 962 targets (4577 packages loaded, 29136 targets configured)
Analyzing: 962 targets (4592 packages loaded, 32941 targets configured)
WARNING: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/remote_java_tools_linux/BUILD:671:1: in hdrs attribute of cc_library rule @remote_java_tools_linux//:combiners: Artifact 'external/remote_java_tools_linux/java_tools/src/tools/singlejar/zip_headers.h' is duplicated (through '@remote_java_tools_linux//:transient_bytes' and '@remote_java_tools_linux//:zip_headers'). Since this rule was created by the macro 'cc_library', the error might have been caused by the macro implementation
Analyzing: 962 targets (4599 packages loaded, 36871 targets configured)
Analyzing: 962 targets (4599 packages loaded, 36871 targets configured)
Analyzing: 962 targets (4599 packages loaded, 36871 targets configured)
DEBUG: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/bazel_gazelle/internal/go_repository.bzl:184:13: org_golang_x_tools: gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go:2:43: expected 'package', found 'EOF'
gazelle: found packages complexnums (complexnums.go) and conversions (conversions.go) in /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/internal/gccgoimporter/testdata
gazelle: found packages p (issue15920.go) and issue25301 (issue25301.go) in /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/internal/gcimporter/testdata
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go:1:34: expected 'package', found 'EOF'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go:1:16: expected ';', found '.'
gazelle: finding module path for import domain.name/importdecl: exit status 1: go: finding module for package domain.name/importdecl
cannot find module providing package domain.name/importdecl: module domain.name/importdecl: reading https://proxy.golang.org/domain.name/importdecl/@v/list: 410 Gone
	server response: not found: domain.name/importdecl@latest: unrecognized import path "domain.name/importdecl": https fetch: Get "https://domain.name/importdecl?go-get=1": dial tcp: lookup domain.name on 8.8.8.8:53: no such host
gazelle: finding module path for import old.com/one: exit status 1: go: finding module for package old.com/one
cannot find module providing package old.com/one: module old.com/one: reading https://proxy.golang.org/old.com/one/@v/list: 410 Gone
	server response: not found: old.com/one@latest: unrecognized import path "old.com/one": https fetch: Get "http://www.old.com/one?go-get=1": redirected from secure URL https://old.com/one?go-get=1 to insecure URL http://www.old.com/one?go-get=1
gazelle: finding module path for import titanic.biz/bar: exit status 1: go: finding module for package titanic.biz/bar
cannot find module providing package titanic.biz/bar: module titanic.biz/bar: reading https://proxy.golang.org/titanic.biz/bar/@v/list: 410 Gone
	server response: not found: titanic.biz/bar@latest: unrecognized import path "titanic.biz/bar": parsing titanic.biz/bar: XML syntax error on line 1: expected attribute name in element
gazelle: finding module path for import titanic.biz/foo: exit status 1: go: finding module for package titanic.biz/foo
cannot find module providing package titanic.biz/foo: module titanic.biz/foo: reading https://proxy.golang.org/titanic.biz/foo/@v/list: 410 Gone
	server response: not found: titanic.biz/foo@latest: unrecognized import path "titanic.biz/foo": reading https://titanic.biz/foo?go-get=1: 403 Forbidden
gazelle: finding module path for import fruit.io/pear: exit status 1: go: finding module for package fruit.io/pear
cannot find module providing package fruit.io/pear: module fruit.io/pear: reading https://proxy.golang.org/fruit.io/pear/@v/list: 410 Gone
	server response: not found: fruit.io/pear@latest: unrecognized import path "fruit.io/pear": https fetch: Get "https://fruit.io/pear?go-get=1": x509: certificate is valid for *.gridserver.com, gridserver.com, not fruit.io
... skipping 97 lines ...
[14,617 / 17,934] 283 / 962 tests; Testing //staging/src/k8s.io/apiserver/pkg/util/shufflesharding:go_default_test [69s (2 actions)] ... (335 actions, 334 running)
[15,201 / 18,156] 348 / 962 tests; Listing all stable metrics. //test/instrumentation:list_stable_metrics; 78s remote ... (397 actions, 396 running)
[16,086 / 18,338] 458 / 962 tests; Testing //staging/src/k8s.io/client-go/rest:go_default_test [62s (2 actions)] ... (448 actions, 446 running)
[17,167 / 18,508] 535 / 962 tests; Testing //staging/src/k8s.io/client-go/rest:go_default_test (run 1 of 2); 102s remote ... (460 actions running)
[17,743 / 18,756] 593 / 962 tests; Testing //pkg/apis/batch/v1beta1:go_default_test [114s (2 actions)] ... (473 actions, 472 running)
[18,805 / 19,056] 805 / 962 tests; GoLink test/e2e/e2e.test_/e2e.test; 134s remote ... (242 actions, 241 running)
FAIL: //pkg/kubelet:go_default_test (run 2 of 2) (see /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/run_2_of_2/test.log)
INFO: From Testing //pkg/kubelet:go_default_test (run 2 of 2):
==================== Test output for //pkg/kubelet:go_default_test (run 2 of 2):
E1026 17:19:30.105671      23 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 17:19:30.106767      23 plugin_manager.go:114] Starting Kubelet Plugin Manager
I1026 17:19:30.128598      23 plugin_manager.go:114] Starting Kubelet Plugin Manager
E1026 17:19:30.135404      23 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 17:19:30.150042      23 plugin_manager.go:114] Starting Kubelet Plugin Manager
E1026 17:19:30.151173      23 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
E1026 17:19:31.162805      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": Get "http://127.0.0.1:34285/api/v1/nodes/127.0.0.1?resourceVersion=0&timeout=1s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E1026 17:19:32.163756      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": Get "http://127.0.0.1:34285/api/v1/nodes/127.0.0.1?timeout=1s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E1026 17:19:33.164811      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": Get "http://127.0.0.1:34285/api/v1/nodes/127.0.0.1?timeout=1s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E1026 17:19:34.165795      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": Get "http://127.0.0.1:34285/api/v1/nodes/127.0.0.1?timeout=1s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E1026 17:19:35.166845      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": Get "http://127.0.0.1:34285/api/v1/nodes/127.0.0.1?timeout=1s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
==================
WARNING: DATA RACE
Read at 0x00c0009c9543 by goroutine 97:
  testing.(*common).logDepth()
      GOROOT/src/testing/testing.go:736 +0xa9
  testing.(*common).log()
... skipping 40 lines ...
      vendor/github.com/cilium/ebpf/syscalls.go:188 +0x2b0
  runtime.doInit()
      GOROOT/src/runtime/proc.go:5625 +0x89
  k8s.io/kubernetes/vendor/github.com/cilium/ebpf/internal/btf.init()
      vendor/github.com/cilium/ebpf/internal/btf/btf.go:656 +0x18f
==================
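
The WARNING: DATA RACE report above has its read side inside testing.(*common).log; the other stack (mostly elided here) is package initialization in vendor/github.com/cilium/ebpf. Race reports that land in testing's logging path most often come from a goroutine that outlives its test and keeps calling t.Logf; a minimal sketch of that anti-pattern (hypothetical code, not the actual kubelet test) is:

```go
package kubelet_test

import (
	"testing"
	"time"
)

// TestLeakyLogger sketches the shape of the race above: a goroutine
// spawned by the test calls t.Logf after the test function returns,
// racing with the testing package's own bookkeeping on t.
func TestLeakyLogger(t *testing.T) {
	go func() {
		time.Sleep(10 * time.Millisecond)
		t.Logf("late log entry") // flagged by `go test -race`
	}()
	// The test returns immediately; the goroutine above outlives it.
}
```

Because this run builds the tests with the race detector enabled, a single report like this is enough to turn the run-2-of-2 attempt into the FAIL recorded below.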
E1026 17:19:35.173042      23 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 17:19:35.173547      23 plugin_manager.go:114] Starting Kubelet Plugin Manager
I1026 17:19:35.188765      23 setters.go:576] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-10-26 17:19:05.188503653 +0000 UTC m=-24.525789466 LastTransitionTime:2020-10-26 17:19:05.188503653 +0000 UTC m=-24.525789466 Reason:KubeletNotReady Message:container runtime is down}
E1026 17:19:35.196828      23 kubelet.go:2155] Container runtime sanity check failed: injected runtime status error
E1026 17:19:35.204868      23 kubelet.go:2159] Container runtime status is nil
E1026 17:19:35.212587      23 kubelet.go:2168] Container runtime network not ready: <nil>
E1026 17:19:35.212732      23 kubelet.go:2179] Container runtime not ready: <nil>
E1026 17:19:35.220336      23 kubelet.go:2179] Container runtime not ready: RuntimeReady=false reason: message:
E1026 17:19:35.236195      23 kubelet.go:2168] Container runtime network not ready: NetworkReady=false reason: message:
I1026 17:19:35.236651      23 setters.go:576] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-10-26 17:19:35.196813323 +0000 UTC m=+5.482520192 LastTransitionTime:2020-10-26 17:19:35.196813323 +0000 UTC m=+5.482520192 Reason:KubeletNotReady Message:runtime network not ready: NetworkReady=false reason: message:}
E1026 17:19:35.249497      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E1026 17:19:35.250040      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E1026 17:19:35.250403      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E1026 17:19:35.250710      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E1026 17:19:35.251101      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E1026 17:19:35.258754      23 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 17:19:35.259499      23 plugin_manager.go:114] Starting Kubelet Plugin Manager
I1026 17:19:35.287391      23 kubelet_network.go:77] Setting Pod CIDR:  -> 10.0.0.0/24,2000::/10
I1026 17:19:35.415575      23 kubelet_node_status.go:70] Attempting to register node 127.0.0.1
I1026 17:19:35.416198      23 kubelet_node_status.go:108] Node 127.0.0.1 was previously registered
I1026 17:19:35.416271      23 kubelet_node_status.go:73] Successfully registered node 127.0.0.1
I1026 17:19:35.419844      23 kubelet_node_status.go:108] Node 127.0.0.1 was previously registered
I1026 17:19:35.420983      23 kubelet_node_status.go:108] Node 127.0.0.1 was previously registered
I1026 17:19:35.421037      23 kubelet_node_status.go:246] Controller attach-detach setting changed to false; updating existing Node
I1026 17:19:35.424490      23 kubelet_node_status.go:108] Node 127.0.0.1 was previously registered
I1026 17:19:35.424540      23 kubelet_node_status.go:249] Controller attach-detach setting changed to true; updating existing Node
E1026 17:19:35.427430      23 kubelet_node_status.go:92] Unable to register node "127.0.0.1" with API server: 
E1026 17:19:35.428357      23 kubelet_node_status.go:98] Unable to register node "127.0.0.1" with API server: error getting existing node: 
I1026 17:19:35.429346      23 kubelet_node_status.go:108] Node 127.0.0.1 was previously registered
I1026 17:19:35.429396      23 kubelet_node_status.go:246] Controller attach-detach setting changed to false; updating existing Node
E1026 17:19:35.430324      23 kubelet_node_status.go:119] Unable to reconcile node "127.0.0.1" with API server: error updating node: failed to patch status "{\"metadata\":{\"annotations\":null}}" for node "127.0.0.1": 
E1026 17:19:35.444539      23 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 17:19:35.445534      23 plugin_manager.go:114] Starting Kubelet Plugin Manager
I1026 17:19:35.457421      23 kubelet_node_status.go:167] Removing now unsupported huge page resource named: hugepages-2Mi
I1026 17:19:35.460697      23 kubelet_node_status.go:181] Zero out resource test.com/resource1 capacity in existing node.
I1026 17:19:35.460890      23 kubelet_node_status.go:181] Zero out resource test.com/resource2 capacity in existing node.
I1026 17:19:35.564478      23 kubelet_node_status.go:70] Attempting to register node 127.0.0.1
I1026 17:19:35.564636      23 kubelet_node_status.go:73] Successfully registered node 127.0.0.1
... skipping 12 lines ...
I1026 17:19:35.730987      23 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I1026 17:19:35.731421      23 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I1026 17:19:35.731847      23 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
E1026 17:19:35.733514      23 kubelet.go:1883] Update channel is closed. Exiting the sync loop.
I1026 17:19:35.733577      23 kubelet.go:1803] Starting kubelet main sync loop.
E1026 17:19:35.733633      23 kubelet.go:1883] Update channel is closed. Exiting the sync loop.
W1026 17:19:35.749381      23 predicate.go:79] Failed to admit pod failedpod_foo(4) - Update plugin resources failed due to Allocation failed, which is unexpected.
E1026 17:19:35.752856      23 runtime.go:209] invalid container ID: ""
E1026 17:19:35.752982      23 runtime.go:209] invalid container ID: ""
I1026 17:19:35.760563      23 kubelet.go:1621] Trying to delete pod foo_ns 11111111
W1026 17:19:35.760671      23 kubelet.go:1625] Deleted mirror pod "foo_ns(11111111)" because it is outdated
E1026 17:19:35.812171      23 kubelet_volumes.go:154] orphaned pod "pod1uid" found, but volume paths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
W1026 17:19:35.814877      23 kubelet_getters.go:300] Path "/tmp/kubelet_test.414441718/pods/pod1uid/volumes" does not exist
... skipping 4 lines ...
E1026 17:19:35.817549      23 kubelet_volumes.go:154] orphaned pod "pod1uid" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
W1026 17:19:35.822006      23 kubelet_getters.go:300] Path "/tmp/kubelet_test.918643338/pods/pod1uid/volumes" does not exist
W1026 17:19:35.822151      23 kubelet_getters.go:300] Path "/tmp/kubelet_test.918643338/pods/pod1uid/volumes" does not exist
W1026 17:19:35.826340      23 kubelet_getters.go:300] Path "/tmp/kubelet_test.874676812/pods/poduid/volumes" does not exist
I1026 17:19:35.839542      23 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 17:19:35.840069      23 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 17:19:35.843634      23 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
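
The "Failed to watch *v1.CSIDriver: unhandled watch" lines here (and the "no reaction implemented" variants further down) are the fake client-go clientset reporting that no reactor was registered for that verb; an informer started against a bare fake client logs exactly this. A minimal sketch of registering a watch reactor with the standard client-go testing helpers (the resource name is taken from the log; the wiring is illustrative, not the kubelet test's own setup):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes/fake"
	k8stesting "k8s.io/client-go/testing"
)

func main() {
	client := fake.NewSimpleClientset()

	// Without a watch reactor, informers built on this client fail
	// with "unhandled watch", as in the reflector errors above.
	fakeWatch := watch.NewFake()
	client.PrependWatchReactor("csidrivers",
		k8stesting.DefaultWatchReactor(fakeWatch, nil))

	fmt.Println("watch reactor registered; informers can now sync")
}
```

These reflector errors repeat throughout the passing volume-manager sequences below, so they read as incidental noise in this log rather than the failure itself.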
I1026 17:19:36.040859      23 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") 
I1026 17:19:36.041066      23 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") from node "127.0.0.1" 
I1026 17:19:36.041153      23 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") 
I1026 17:19:36.041226      23 reconciler.go:157] Reconciler: start to sync state
I1026 17:19:36.041559      23 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") from node "127.0.0.1" 
I1026 17:19:36.142200      23 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
... skipping 2 lines ...
I1026 17:19:36.142496      23 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") device mount path ""
I1026 17:19:36.142554      23 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I1026 17:19:36.142648      23 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") device mount path ""
I1026 17:19:36.440183      23 volume_manager.go:277] Shutting down Kubelet Volume Manager
I1026 17:19:36.447162      23 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 17:19:36.450744      23 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 17:19:36.452025      23 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I1026 17:19:36.651279      23 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") 
I1026 17:19:36.651714      23 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") 
I1026 17:19:36.652073      23 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") 
I1026 17:19:36.652732      23 reconciler.go:157] Reconciler: start to sync state
I1026 17:19:36.652478      23 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") from node "127.0.0.1" 
I1026 17:19:36.652612      23 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol3" (UniqueName: "fake/fake-device3") from node "127.0.0.1" 
... skipping 7 lines ...
I1026 17:19:36.756856      23 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") DevicePath "/dev/vdb-test"
I1026 17:19:36.757057      23 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") DevicePath "/dev/sdb"
I1026 17:19:36.757358      23 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") device mount path ""
I1026 17:19:37.047817      23 volume_manager.go:277] Shutting down Kubelet Volume Manager
I1026 17:19:37.050872      23 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 17:19:37.051025      23 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 17:19:37.053806      23 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I1026 17:19:37.252403      23 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I1026 17:19:37.252539      23 reconciler.go:157] Reconciler: start to sync state
I1026 17:19:37.252646      23 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I1026 17:19:37.353459      23 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I1026 17:19:37.353648      23 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I1026 17:19:37.353748      23 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I1026 17:19:37.652005      23 volume_manager.go:277] Shutting down Kubelet Volume Manager
I1026 17:19:37.656183      23 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 17:19:37.656284      23 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 17:19:37.659016      23 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I1026 17:19:37.857314      23 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I1026 17:19:37.857632      23 reconciler.go:157] Reconciler: start to sync state
I1026 17:19:37.857418      23 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I1026 17:19:37.958284      23 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I1026 17:19:37.958405      23 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I1026 17:19:37.958665      23 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I1026 17:19:38.359198      23 reconciler.go:196] operationExecutor.UnmountVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "12345678" (UID: "12345678") 
I1026 17:19:38.359379      23 operation_generator.go:786] UnmountVolume.TearDown succeeded for volume "fake/fake-device" (OuterVolumeSpecName: "vol1") pod "12345678" (UID: "12345678"). InnerVolumeSpecName "vol1". PluginName "fake", VolumeGidValue ""
I1026 17:19:38.459760      23 reconciler.go:312] operationExecutor.UnmountDevice started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I1026 17:19:38.459949      23 operation_generator.go:880] UnmountDevice succeeded for volume "vol1" %!(EXTRA string=UnmountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" )
I1026 17:19:38.560268      23 reconciler.go:333] operationExecutor.DetachVolume started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I1026 17:19:38.560341      23 operation_generator.go:470] DetachVolume.Detach succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
E1026 17:19:38.585392      23 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I1026 17:19:38.657816      23 volume_manager.go:277] Shutting down Kubelet Volume Manager
I1026 17:19:38.660480      23 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 17:19:38.660631      23 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 17:19:38.663125      23 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: no reaction implemented for {{ list storage.k8s.io/v1, Resource=csidrivers } storage.k8s.io/v1, Kind=CSIDriver  { }}
I1026 17:19:38.861523      23 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I1026 17:19:38.861627      23 reconciler.go:157] Reconciler: start to sync state
I1026 17:19:38.861803      23 operation_generator.go:1346] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I1026 17:19:38.962327      23 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I1026 17:19:38.962454      23 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I1026 17:19:38.962541      23 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I1026 17:19:39.261144      23 volume_manager.go:277] Shutting down Kubelet Volume Manager
I1026 17:19:39.263602      23 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 17:19:39.263724      23 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 17:19:39.266376      23 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: no reaction implemented for {{ list storage.k8s.io/v1, Resource=csidrivers } storage.k8s.io/v1, Kind=CSIDriver  { }}
I1026 17:19:39.464681      23 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I1026 17:19:39.464833      23 reconciler.go:157] Reconciler: start to sync state
I1026 17:19:39.465005      23 operation_generator.go:1346] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I1026 17:19:39.565706      23 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I1026 17:19:39.565900      23 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I1026 17:19:39.566004      23 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I1026 17:19:39.966235      23 reconciler.go:196] operationExecutor.UnmountVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "12345678" (UID: "12345678") 
I1026 17:19:39.966582      23 operation_generator.go:786] UnmountVolume.TearDown succeeded for volume "fake/fake-device" (OuterVolumeSpecName: "vol1") pod "12345678" (UID: "12345678"). InnerVolumeSpecName "vol1". PluginName "fake", VolumeGidValue ""
I1026 17:19:40.066788      23 reconciler.go:312] operationExecutor.UnmountDevice started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I1026 17:19:40.067130      23 operation_generator.go:880] UnmountDevice succeeded for volume "vol1" %!(EXTRA string=UnmountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" )
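
The %!(EXTRA string=...) tail on the two UnmountDevice lines above is Go's fmt package flagging a surplus argument: the log call passed one more operand than its format string has verbs. A minimal illustration of how fmt produces that artifact (hypothetical call, not the operation_generator code):

```go
package main

import "fmt"

func main() {
	// One format verb, two operands: fmt prints the first and appends
	// the surplus one as %!(EXTRA string=...), matching the artifact
	// on the UnmountDevice lines above.
	fmt.Printf("UnmountDevice succeeded for volume %q ",
		"vol1", "surplus argument")
	// Prints: UnmountDevice succeeded for volume "vol1" %!(EXTRA string=surplus argument)
}
```

The message is cosmetic; dropping the duplicated argument or adding a matching verb in the log call would clean it up.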
E1026 17:19:40.106510      23 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
E1026 17:19:40.136016      23 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
E1026 17:19:40.151878      23 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 17:19:40.167201      23 reconciler.go:319] Volume detached for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" DevicePath "/dev/sdb"
I1026 17:19:40.265178      23 volume_manager.go:277] Shutting down Kubelet Volume Manager
W1026 17:19:40.267111      23 pod_container_deletor.go:79] Container "abc" not found in pod's containers
I1026 17:19:40.440879      23 runonce.go:88] Waiting for 1 pods
I1026 17:19:40.440955      23 runonce.go:123] pod "foo_new(12345678)" containers running
I1026 17:19:40.441185      23 runonce.go:102] started pod "foo_new(12345678)"
I1026 17:19:40.441239      23 runonce.go:108] 1 pods started
FAIL
================================================================================
[19,180 / 19,194] 955 / 962 tests, 1 failed; Testing //cmd/kubeadm/app/phases/upgrade:go_default_test [115s (2 actions)] ... (14 actions running)
[19,192 / 19,194] 961 / 962 tests, 1 failed; Testing //pkg/controlplane:go_default_test [100s (2 actions)] ... (2 actions running)
INFO: Elapsed time: 575.384s, Critical Path: 467.93s
INFO: 17269 processes: 15341 remote cache hit, 1928 remote.
INFO: Build completed, 1 test FAILED, 19194 total actions
//cluster:common_test                                                    PASSED in 15.4s
  Stats over 2 runs: max = 15.4s, min = 15.1s, avg = 15.3s, dev = 0.2s
//cluster:kube-util_test                                                 PASSED in 16.7s
  Stats over 2 runs: max = 16.7s, min = 16.7s, avg = 16.7s, dev = 0.0s
//cluster/gce/cos:go_default_test                                        PASSED in 26.3s
  Stats over 2 runs: max = 26.3s, min = 26.3s, avg = 26.3s, dev = 0.0s
... skipping 1910 lines ...
//third_party/forked/golang/expansion:go_default_test                    PASSED in 13.2s
  Stats over 2 runs: max = 13.2s, min = 12.5s, avg = 12.8s, dev = 0.3s
//third_party/forked/golang/reflect:go_default_test                      PASSED in 5.3s
  Stats over 2 runs: max = 5.3s, min = 5.2s, avg = 5.2s, dev = 0.0s
//third_party/forked/gonum/graph/simple:go_default_test                  PASSED in 5.7s
  Stats over 2 runs: max = 5.7s, min = 5.2s, avg = 5.5s, dev = 0.3s
//pkg/kubelet:go_default_test                                            FAILED in 1 out of 2 in 21.7s
  Stats over 2 runs: max = 21.7s, min = 21.3s, avg = 21.5s, dev = 0.2s
  /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/run_2_of_2/test.log

Executed 962 out of 962 tests: 961 tests pass and 1 fails remotely.
There were tests whose specified size is too big. Use the --test_verbose_timeout_warnings command line option to see which ones these are.
INFO: Build completed, 1 test FAILED, 19194 total actions
+ ../test-infra/hack/coalesce.py
+ exit 3