Result FAILURE
Tests 0 failed / 1920 succeeded
Started 2020-10-26 18:13
Elapsed 11m5s
Revision master

No Test Failures!



Error lines from build-log.txt

+ bazel test --config=unit --config=remote --remote_instance_name=projects/k8s-prow-builds/instances/default_instance //... //hack:verify-all -- -//build/... -//vendor/...
$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
INFO: Invocation ID: a35063d8-c746-44b0-a39d-1e804e307f5f
WARNING: Download from https://storage.googleapis.com/k8s-bazel-cache/https://github.com/bazelbuild/rules_docker/releases/download/v0.14.4/rules_docker-v0.14.4.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
Loading:  (1 packages loaded)
Loading: 1 packages loaded
Loading: 83 packages loaded
Loading: 662 packages loaded
    currently loading: test/soak/serve_hostnames
Loading: 1576 packages loaded
... skipping 5 lines ...
Analyzing: 962 targets (4557 packages loaded, 0 targets configured)
DEBUG: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/bazel_toolchains/rules/rbe_repo.bzl:491:5: Bazel 2.2.0 is used in rbe_default.
Analyzing: 962 targets (4577 packages loaded, 9258 targets configured)
Analyzing: 962 targets (4577 packages loaded, 29136 targets configured)
Analyzing: 962 targets (4577 packages loaded, 29136 targets configured)
Analyzing: 962 targets (4597 packages loaded, 36727 targets configured)
WARNING: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/remote_java_tools_linux/BUILD:671:1: in hdrs attribute of cc_library rule @remote_java_tools_linux//:combiners: Artifact 'external/remote_java_tools_linux/java_tools/src/tools/singlejar/zip_headers.h' is duplicated (through '@remote_java_tools_linux//:transient_bytes' and '@remote_java_tools_linux//:zip_headers'). Since this rule was created by the macro 'cc_library', the error might have been caused by the macro implementation
Analyzing: 962 targets (4599 packages loaded, 36871 targets configured)
Analyzing: 962 targets (4599 packages loaded, 36871 targets configured)
Analyzing: 962 targets (4599 packages loaded, 36871 targets configured)
DEBUG: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/bazel_gazelle/internal/go_repository.bzl:184:13: org_golang_x_tools: gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go:2:43: expected 'package', found 'EOF'
gazelle: found packages issue30628 (issue30628.go) and issue34182 (issue34182.go) in /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/internal/gccgoimporter/testdata
gazelle: found packages a (a.go) and b (b.go) in /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/internal/gcimporter/testdata
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go:1:34: expected 'package', found 'EOF'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go:1:16: expected ';', found '.'
gazelle: finding module path for import domain.name/importdecl: exit status 1: go: finding module for package domain.name/importdecl
cannot find module providing package domain.name/importdecl: module domain.name/importdecl: reading https://proxy.golang.org/domain.name/importdecl/@v/list: 410 Gone
	server response: not found: domain.name/importdecl@latest: unrecognized import path "domain.name/importdecl": https fetch: Get "https://domain.name/importdecl?go-get=1": dial tcp: lookup domain.name on 8.8.8.8:53: no such host
gazelle: finding module path for import old.com/one: exit status 1: go: finding module for package old.com/one
cannot find module providing package old.com/one: module old.com/one: reading https://proxy.golang.org/old.com/one/@v/list: 410 Gone
	server response: not found: old.com/one@latest: unrecognized import path "old.com/one": https fetch: Get "http://www.old.com/one?go-get=1": redirected from secure URL https://old.com/one?go-get=1 to insecure URL http://www.old.com/one?go-get=1
gazelle: finding module path for import titanic.biz/bar: exit status 1: go: finding module for package titanic.biz/bar
cannot find module providing package titanic.biz/bar: module titanic.biz/bar: reading https://proxy.golang.org/titanic.biz/bar/@v/list: 410 Gone
	server response: not found: titanic.biz/bar@latest: unrecognized import path "titanic.biz/bar": parsing titanic.biz/bar: XML syntax error on line 1: expected attribute name in element
gazelle: finding module path for import titanic.biz/foo: exit status 1: go: finding module for package titanic.biz/foo
cannot find module providing package titanic.biz/foo: module titanic.biz/foo: reading https://proxy.golang.org/titanic.biz/foo/@v/list: 410 Gone
	server response: not found: titanic.biz/foo@latest: unrecognized import path "titanic.biz/foo": parsing titanic.biz/foo: XML syntax error on line 1: expected attribute name in element
gazelle: finding module path for import fruit.io/pear: exit status 1: go: finding module for package fruit.io/pear
cannot find module providing package fruit.io/pear: module fruit.io/pear: reading https://proxy.golang.org/fruit.io/pear/@v/list: 410 Gone
	server response: not found: fruit.io/pear@latest: unrecognized import path "fruit.io/pear": https fetch: Get "https://fruit.io/pear?go-get=1": x509: certificate is valid for *.gridserver.com, gridserver.com, not fruit.io
gazelle: finding module path for import fruit.io/banana: exit status 1: go: finding module for package fruit.io/banana
cannot find module providing package fruit.io/banana: module fruit.io/banana: reading https://proxy.golang.org/fruit.io/banana/@v/list: 410 Gone
	server response: not found: fruit.io/banana@latest: unrecognized import path "fruit.io/banana": https fetch: Get "https://fruit.io/banana?go-get=1": x509: certificate is valid for *.gridserver.com, gridserver.com, not fruit.io
... skipping 95 lines ...
[15,329 / 18,082] 370 / 962 tests; Listing all stable metrics. //test/instrumentation:list_stable_metrics; 46s remote ... (303 actions, 302 running)
[16,052 / 18,286] 452 / 962 tests; Listing all stable metrics. //test/instrumentation:list_stable_metrics; 87s remote ... (374 actions, 373 running)
[17,146 / 18,464] 560 / 962 tests; Listing all stable metrics. //test/instrumentation:list_stable_metrics; 135s remote ... (391 actions, 390 running)
[17,786 / 18,690] 621 / 962 tests; GoLink pkg/cloudprovider/providers/go_default_test_/go_default_test; 72s remote ... (386 actions, 384 running)
[18,455 / 18,954] 778 / 962 tests; Testing //cmd/kubeadm/app/phases/certs:go_default_test [107s (2 actions)] ... (254 actions, 252 running)
[19,158 / 19,184] 947 / 962 tests; Testing //pkg/controller/nodeipam:go_default_test [102s (2 actions)] ... (24 actions running)
FAIL: //pkg/kubelet:go_default_test (run 1 of 2) (see /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/run_1_of_2/test.log)
INFO: From Testing //pkg/kubelet:go_default_test (run 1 of 2):
==================== Test output for //pkg/kubelet:go_default_test (run 1 of 2):
E1026 18:22:39.525073      23 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 18:22:39.525966      23 plugin_manager.go:114] Starting Kubelet Plugin Manager
E1026 18:22:39.544950      23 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 18:22:39.545665      23 plugin_manager.go:114] Starting Kubelet Plugin Manager
E1026 18:22:39.554792      23 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 18:22:39.555767      23 plugin_manager.go:114] Starting Kubelet Plugin Manager
E1026 18:22:40.573003      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": Get "http://127.0.0.1:39803/api/v1/nodes/127.0.0.1?resourceVersion=0&timeout=1s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E1026 18:22:41.573938      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": Get "http://127.0.0.1:39803/api/v1/nodes/127.0.0.1?timeout=1s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E1026 18:22:42.574822      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": Get "http://127.0.0.1:39803/api/v1/nodes/127.0.0.1?timeout=1s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E1026 18:22:43.575780      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": Get "http://127.0.0.1:39803/api/v1/nodes/127.0.0.1?timeout=1s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E1026 18:22:44.576713      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": Get "http://127.0.0.1:39803/api/v1/nodes/127.0.0.1?timeout=1s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
==================
WARNING: DATA RACE
Read at 0x00c000450f43 by goroutine 97:
  testing.(*common).logDepth()
      GOROOT/src/testing/testing.go:736 +0xa9
  testing.(*common).log()
... skipping 40 lines ...
      vendor/github.com/cilium/ebpf/syscalls.go:188 +0x2b0
  runtime.doInit()
      GOROOT/src/runtime/proc.go:5625 +0x89
  k8s.io/kubernetes/vendor/github.com/cilium/ebpf/internal/btf.init()
      vendor/github.com/cilium/ebpf/internal/btf/btf.go:656 +0x18f
==================
E1026 18:22:44.580385      23 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 18:22:44.580993      23 plugin_manager.go:114] Starting Kubelet Plugin Manager
I1026 18:22:44.602546      23 setters.go:576] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-10-26 18:22:14.602345069 +0000 UTC m=-24.568399761 LastTransitionTime:2020-10-26 18:22:14.602345069 +0000 UTC m=-24.568399761 Reason:KubeletNotReady Message:container runtime is down}
E1026 18:22:44.609461      23 kubelet.go:2155] Container runtime sanity check failed: injected runtime status error
E1026 18:22:44.616538      23 kubelet.go:2159] Container runtime status is nil
E1026 18:22:44.623211      23 kubelet.go:2168] Container runtime network not ready: <nil>
E1026 18:22:44.623372      23 kubelet.go:2179] Container runtime not ready: <nil>
E1026 18:22:44.630531      23 kubelet.go:2179] Container runtime not ready: RuntimeReady=false reason: message:
E1026 18:22:44.645250      23 kubelet.go:2168] Container runtime network not ready: NetworkReady=false reason: message:
I1026 18:22:44.645751      23 setters.go:576] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-10-26 18:22:44.609445188 +0000 UTC m=+5.438700367 LastTransitionTime:2020-10-26 18:22:44.609445188 +0000 UTC m=+5.438700367 Reason:KubeletNotReady Message:runtime network not ready: NetworkReady=false reason: message:}
E1026 18:22:44.654849      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E1026 18:22:44.654953      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E1026 18:22:44.655093      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E1026 18:22:44.655237      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E1026 18:22:44.655371      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E1026 18:22:44.663800      23 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 18:22:44.664482      23 plugin_manager.go:114] Starting Kubelet Plugin Manager
I1026 18:22:44.690454      23 kubelet_network.go:77] Setting Pod CIDR:  -> 10.0.0.0/24,2000::/10
I1026 18:22:44.809480      23 kubelet_node_status.go:70] Attempting to register node 127.0.0.1
I1026 18:22:44.809924      23 kubelet_node_status.go:108] Node 127.0.0.1 was previously registered
I1026 18:22:44.810048      23 kubelet_node_status.go:73] Successfully registered node 127.0.0.1
I1026 18:22:44.813774      23 kubelet_node_status.go:108] Node 127.0.0.1 was previously registered
I1026 18:22:44.816278      23 kubelet_node_status.go:108] Node 127.0.0.1 was previously registered
I1026 18:22:44.816652      23 kubelet_node_status.go:246] Controller attach-detach setting changed to false; updating existing Node
I1026 18:22:44.823219      23 kubelet_node_status.go:108] Node 127.0.0.1 was previously registered
I1026 18:22:44.823524      23 kubelet_node_status.go:249] Controller attach-detach setting changed to true; updating existing Node
E1026 18:22:44.827310      23 kubelet_node_status.go:92] Unable to register node "127.0.0.1" with API server: 
E1026 18:22:44.828409      23 kubelet_node_status.go:98] Unable to register node "127.0.0.1" with API server: error getting existing node: 
I1026 18:22:44.829512      23 kubelet_node_status.go:108] Node 127.0.0.1 was previously registered
I1026 18:22:44.829556      23 kubelet_node_status.go:246] Controller attach-detach setting changed to false; updating existing Node
E1026 18:22:44.830377      23 kubelet_node_status.go:119] Unable to reconcile node "127.0.0.1" with API server: error updating node: failed to patch status "{\"metadata\":{\"annotations\":null}}" for node "127.0.0.1": 
E1026 18:22:44.835385      23 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 18:22:44.836294      23 plugin_manager.go:114] Starting Kubelet Plugin Manager
I1026 18:22:44.847687      23 kubelet_node_status.go:167] Removing now unsupported huge page resource named: hugepages-2Mi
I1026 18:22:44.850002      23 kubelet_node_status.go:181] Zero out resource test.com/resource2 capacity in existing node.
I1026 18:22:44.850145      23 kubelet_node_status.go:181] Zero out resource test.com/resource1 capacity in existing node.
I1026 18:22:44.953631      23 kubelet_node_status.go:70] Attempting to register node 127.0.0.1
I1026 18:22:44.953791      23 kubelet_node_status.go:73] Successfully registered node 127.0.0.1
... skipping 12 lines ...
I1026 18:22:45.113105      23 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I1026 18:22:45.113553      23 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I1026 18:22:45.114217      23 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
E1026 18:22:45.115874      23 kubelet.go:1883] Update channel is closed. Exiting the sync loop.
I1026 18:22:45.115933      23 kubelet.go:1803] Starting kubelet main sync loop.
E1026 18:22:45.115987      23 kubelet.go:1883] Update channel is closed. Exiting the sync loop.
W1026 18:22:45.133130      23 predicate.go:79] Failed to admit pod failedpod_foo(4) - Update plugin resources failed due to Allocation failed, which is unexpected.
E1026 18:22:45.140981      23 runtime.go:209] invalid container ID: ""
E1026 18:22:45.141100      23 runtime.go:209] invalid container ID: ""
I1026 18:22:45.146777      23 kubelet.go:1621] Trying to delete pod foo_ns 11111111
W1026 18:22:45.146861      23 kubelet.go:1625] Deleted mirror pod "foo_ns(11111111)" because it is outdated
W1026 18:22:45.187295      23 kubelet_getters.go:300] Path "/tmp/kubelet_test.776826668/pods/pod1uid/volumes" does not exist
W1026 18:22:45.187409      23 kubelet_getters.go:300] Path "/tmp/kubelet_test.776826668/pods/pod1uid/volumes" does not exist
... skipping 4 lines ...
W1026 18:22:45.194370      23 kubelet_getters.go:300] Path "/tmp/kubelet_test.303303456/pods/pod1uid/volumes" does not exist
W1026 18:22:45.194470      23 kubelet_getters.go:300] Path "/tmp/kubelet_test.303303456/pods/pod1uid/volumes" does not exist
E1026 18:22:45.198592      23 kubelet_volumes.go:154] orphaned pod "pod1uid" found, but volume paths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
W1026 18:22:45.206519      23 kubelet_getters.go:300] Path "/tmp/kubelet_test.110941862/pods/poduid/volumes" does not exist
I1026 18:22:45.221860      23 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 18:22:45.222404      23 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 18:22:45.225381      23 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I1026 18:22:45.423185      23 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") 
I1026 18:22:45.423640      23 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") from node "127.0.0.1" 
I1026 18:22:45.423798      23 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") 
I1026 18:22:45.424445      23 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") from node "127.0.0.1" 
I1026 18:22:45.424584      23 reconciler.go:157] Reconciler: start to sync state
I1026 18:22:45.525937      23 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
... skipping 2 lines ...
I1026 18:22:45.526842      23 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I1026 18:22:45.527194      23 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I1026 18:22:45.527575      23 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") device mount path ""
I1026 18:22:45.822675      23 volume_manager.go:277] Shutting down Kubelet Volume Manager
I1026 18:22:45.825191      23 desired_state_of_world_populator.go:142] Desired state populator starts to run
I1026 18:22:45.825162      23 volume_manager.go:266] Starting Kubelet Volume Manager
E1026 18:22:45.828078      23 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I1026 18:22:46.026585      23 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") 
I1026 18:22:46.027099      23 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") from node "127.0.0.1" 
I1026 18:22:46.027141      23 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") 
I1026 18:22:46.028179      23 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol3" (UniqueName: "fake/fake-device3") from node "127.0.0.1" 
I1026 18:22:46.028685      23 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") 
I1026 18:22:46.028860      23 reconciler.go:157] Reconciler: start to sync state
... skipping 7 lines ...
I1026 18:22:46.130402      23 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") DevicePath "/dev/sdb"
I1026 18:22:46.132411      23 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") device mount path ""
I1026 18:22:46.132693      23 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") device mount path ""
I1026 18:22:46.426025      23 volume_manager.go:277] Shutting down Kubelet Volume Manager
I1026 18:22:46.428497      23 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 18:22:46.428668      23 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 18:22:46.431538      23 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I1026 18:22:46.529991      23 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I1026 18:22:46.530084      23 reconciler.go:157] Reconciler: start to sync state
I1026 18:22:46.530521      23 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I1026 18:22:46.630812      23 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I1026 18:22:46.631052      23 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I1026 18:22:46.631308      23 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I1026 18:22:46.729075      23 volume_manager.go:277] Shutting down Kubelet Volume Manager
I1026 18:22:46.731394      23 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 18:22:46.731652      23 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 18:22:46.734349      23 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I1026 18:22:46.932391      23 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I1026 18:22:46.932486      23 reconciler.go:157] Reconciler: start to sync state
I1026 18:22:46.932903      23 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I1026 18:22:47.033518      23 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I1026 18:22:47.034025      23 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I1026 18:22:47.034121      23 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I1026 18:22:47.434577      23 reconciler.go:196] operationExecutor.UnmountVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "12345678" (UID: "12345678") 
I1026 18:22:47.434747      23 operation_generator.go:786] UnmountVolume.TearDown succeeded for volume "fake/fake-device" (OuterVolumeSpecName: "vol1") pod "12345678" (UID: "12345678"). InnerVolumeSpecName "vol1". PluginName "fake", VolumeGidValue ""
I1026 18:22:47.535116      23 reconciler.go:312] operationExecutor.UnmountDevice started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I1026 18:22:47.535199      23 operation_generator.go:880] UnmountDevice succeeded for volume "vol1" %!(EXTRA string=UnmountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" )
I1026 18:22:47.635732      23 reconciler.go:333] operationExecutor.DetachVolume started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I1026 18:22:47.635884      23 operation_generator.go:470] DetachVolume.Detach succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
E1026 18:22:47.660578      23 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I1026 18:22:47.732826      23 volume_manager.go:277] Shutting down Kubelet Volume Manager
I1026 18:22:47.735044      23 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 18:22:47.735265      23 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 18:22:47.737896      23 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: no reaction implemented for {{ list storage.k8s.io/v1, Resource=csidrivers } storage.k8s.io/v1, Kind=CSIDriver  { }}
I1026 18:22:47.836712      23 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I1026 18:22:47.836823      23 reconciler.go:157] Reconciler: start to sync state
E1026 18:22:47.837169      23 nestedpendingoperations.go:301] Operation for "{volumeName:fake/fake-device podName: nodeName:}" failed. No retries permitted until 2020-10-26 18:22:48.336951023 +0000 UTC m=+9.166206198 (durationBeforeRetry 500ms). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"vol1\" (UniqueName: \"fake/fake-device\") pod \"foo\" (UID: \"12345678\") "
I1026 18:22:48.338592      23 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I1026 18:22:48.339115      23 operation_generator.go:1346] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I1026 18:22:48.439265      23 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I1026 18:22:48.439391      23 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I1026 18:22:48.439476      23 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I1026 18:22:48.635736      23 volume_manager.go:277] Shutting down Kubelet Volume Manager
I1026 18:22:48.638466      23 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 18:22:48.638557      23 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 18:22:48.641140      23 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: no reaction implemented for {{ list storage.k8s.io/v1, Resource=csidrivers } storage.k8s.io/v1, Kind=CSIDriver  { }}
I1026 18:22:48.839398      23 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I1026 18:22:48.839495      23 reconciler.go:157] Reconciler: start to sync state
I1026 18:22:48.840062      23 operation_generator.go:1346] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I1026 18:22:48.940161      23 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I1026 18:22:48.940294      23 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I1026 18:22:48.940452      23 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I1026 18:22:49.340950      23 reconciler.go:196] operationExecutor.UnmountVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "12345678" (UID: "12345678") 
I1026 18:22:49.341172      23 operation_generator.go:786] UnmountVolume.TearDown succeeded for volume "fake/fake-device" (OuterVolumeSpecName: "vol1") pod "12345678" (UID: "12345678"). InnerVolumeSpecName "vol1". PluginName "fake", VolumeGidValue ""
I1026 18:22:49.441487      23 reconciler.go:312] operationExecutor.UnmountDevice started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I1026 18:22:49.441588      23 operation_generator.go:880] UnmountDevice succeeded for volume "vol1" %!(EXTRA string=UnmountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" )
E1026 18:22:49.525733      23 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 18:22:49.541847      23 reconciler.go:319] Volume detached for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" DevicePath "/dev/sdb"
E1026 18:22:49.545973      23 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
E1026 18:22:49.555325      23 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 18:22:49.640592      23 volume_manager.go:277] Shutting down Kubelet Volume Manager
W1026 18:22:49.644627      23 pod_container_deletor.go:79] Container "abc" not found in pod's containers
I1026 18:22:49.811271      23 runonce.go:88] Waiting for 1 pods
I1026 18:22:49.811412      23 runonce.go:123] pod "foo_new(12345678)" containers running
I1026 18:22:49.811936      23 runonce.go:102] started pod "foo_new(12345678)"
I1026 18:22:49.812023      23 runonce.go:108] 1 pods started
FAIL
================================================================================
[19,192 / 19,194] 961 / 962 tests, 1 failed; Testing //pkg/kubelet/volumemanager/reconciler:go_default_test [115s (2 actions)] ... (2 actions running)
INFO: Elapsed time: 661.516s, Critical Path: 558.83s
INFO: 17269 processes: 11553 remote cache hit, 5716 remote.
INFO: Build completed, 1 test FAILED, 19194 total actions
//cluster:common_test                                                    PASSED in 8.0s
  Stats over 2 runs: max = 8.0s, min = 7.9s, avg = 7.9s, dev = 0.0s
//cluster:kube-util_test                                                 PASSED in 8.1s
  Stats over 2 runs: max = 8.1s, min = 8.1s, avg = 8.1s, dev = 0.0s
//cluster/gce/cos:go_default_test                                        PASSED in 17.7s
  Stats over 2 runs: max = 17.7s, min = 17.0s, avg = 17.3s, dev = 0.3s
... skipping 1910 lines ...
//third_party/forked/golang/expansion:go_default_test                    PASSED in 7.2s
  Stats over 2 runs: max = 7.2s, min = 7.2s, avg = 7.2s, dev = 0.0s
//third_party/forked/golang/reflect:go_default_test                      PASSED in 6.8s
  Stats over 2 runs: max = 6.8s, min = 6.7s, avg = 6.7s, dev = 0.0s
//third_party/forked/gonum/graph/simple:go_default_test                  PASSED in 7.1s
  Stats over 2 runs: max = 7.1s, min = 6.8s, avg = 7.0s, dev = 0.2s
//pkg/kubelet:go_default_test                                            FAILED in 1 out of 2 in 16.1s
  Stats over 2 runs: max = 16.1s, min = 15.8s, avg = 16.0s, dev = 0.2s
  /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/run_1_of_2/test.log

Executed 962 out of 962 tests: 961 tests pass and 1 fails remotely.
There were tests whose specified size is too big. Use the --test_verbose_timeout_warnings command line option to see which ones these are.
INFO: Build completed, 1 test FAILED, 19194 total actions
+ ../test-infra/hack/coalesce.py
+ exit 3