Result: FAILURE
Tests: 0 failed / 1920 succeeded
Started: 2020-10-26 23:18
Elapsed: 9m5s
Revision: master


Error lines from build-log.txt

+ bazel test --config=unit --config=remote --remote_instance_name=projects/k8s-prow-builds/instances/default_instance //... //hack:verify-all -- -//build/... -//vendor/...
$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
INFO: Invocation ID: 27146792-0dcc-411f-965b-19e0a760bdd5
WARNING: Download from https://storage.googleapis.com/k8s-bazel-cache/https://github.com/bazelbuild/rules_docker/releases/download/v0.14.4/rules_docker-v0.14.4.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
Loading:  (1 packages loaded)
Loading: 1 packages loaded
Loading: 13 packages loaded
    currently loading: build/release-tars ... (565 packages)
Loading: 950 packages loaded
    currently loading: build/release-tars ... (11 packages)
... skipping 6 lines ...
Analyzing: 962 targets (4558 packages loaded, 0 targets configured)
DEBUG: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/bazel_toolchains/rules/rbe_repo.bzl:491:5: Bazel 2.2.0 is used in rbe_default.
Analyzing: 962 targets (4578 packages loaded, 11601 targets configured)
Analyzing: 962 targets (4578 packages loaded, 29145 targets configured)
Analyzing: 962 targets (4594 packages loaded, 30114 targets configured)
Analyzing: 962 targets (4598 packages loaded, 36736 targets configured)
WARNING: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/remote_java_tools_linux/BUILD:671:1: in hdrs attribute of cc_library rule @remote_java_tools_linux//:combiners: Artifact 'external/remote_java_tools_linux/java_tools/src/tools/singlejar/zip_headers.h' is duplicated (through '@remote_java_tools_linux//:transient_bytes' and '@remote_java_tools_linux//:zip_headers'). Since this rule was created by the macro 'cc_library', the error might have been caused by the macro implementation
Analyzing: 962 targets (4600 packages loaded, 36880 targets configured)
Analyzing: 962 targets (4600 packages loaded, 36880 targets configured)
DEBUG: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/bazel_gazelle/internal/go_repository.bzl:184:13: org_golang_x_tools: gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go:2:43: expected 'package', found 'EOF'
gazelle: found packages escapeinfo (escapeinfo.go) and complexnums (complexnums.go) in /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/internal/gccgoimporter/testdata
gazelle: found packages a (a.go) and b (b.go) in /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/internal/gcimporter/testdata
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go:1:34: expected 'package', found 'EOF'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go:1:16: expected ';', found '.'
gazelle: finding module path for import domain.name/importdecl: exit status 1: go: finding module for package domain.name/importdecl
cannot find module providing package domain.name/importdecl: module domain.name/importdecl: reading https://proxy.golang.org/domain.name/importdecl/@v/list: 410 Gone
	server response: not found: domain.name/importdecl@latest: unrecognized import path "domain.name/importdecl": https fetch: Get "https://domain.name/importdecl?go-get=1": dial tcp: lookup domain.name on 8.8.8.8:53: no such host
gazelle: finding module path for import old.com/one: exit status 1: go: finding module for package old.com/one
cannot find module providing package old.com/one: module old.com/one: reading https://proxy.golang.org/old.com/one/@v/list: 410 Gone
	server response: not found: old.com/one@latest: unrecognized import path "old.com/one": https fetch: Get "http://www.old.com/one?go-get=1": redirected from secure URL https://old.com/one?go-get=1 to insecure URL http://www.old.com/one?go-get=1
gazelle: finding module path for import titanic.biz/bar: exit status 1: go: finding module for package titanic.biz/bar
cannot find module providing package titanic.biz/bar: module titanic.biz/bar: reading https://proxy.golang.org/titanic.biz/bar/@v/list: 410 Gone
	server response: not found: titanic.biz/bar@latest: unrecognized import path "titanic.biz/bar": parsing titanic.biz/bar: XML syntax error on line 1: expected attribute name in element
gazelle: finding module path for import titanic.biz/foo: exit status 1: go: finding module for package titanic.biz/foo
cannot find module providing package titanic.biz/foo: module titanic.biz/foo: reading https://proxy.golang.org/titanic.biz/foo/@v/list: 410 Gone
	server response: not found: titanic.biz/foo@latest: unrecognized import path "titanic.biz/foo": parsing titanic.biz/foo: XML syntax error on line 1: expected attribute name in element
gazelle: finding module path for import fruit.io/pear: exit status 1: go: finding module for package fruit.io/pear
cannot find module providing package fruit.io/pear: module fruit.io/pear: reading https://proxy.golang.org/fruit.io/pear/@v/list: 410 Gone
	server response: not found: fruit.io/pear@latest: unrecognized import path "fruit.io/pear": https fetch: Get "https://fruit.io/pear?go-get=1": x509: certificate is valid for *.gridserver.com, gridserver.com, not fruit.io
gazelle: finding module path for import fruit.io/banana: exit status 1: go: finding module for package fruit.io/banana
cannot find module providing package fruit.io/banana: module fruit.io/banana: reading https://proxy.golang.org/fruit.io/banana/@v/list: 410 Gone
	server response: not found: fruit.io/banana@latest: unrecognized import path "fruit.io/banana": https fetch: Get "https://fruit.io/banana?go-get=1": x509: certificate is valid for *.gridserver.com, gridserver.com, not fruit.io
... skipping 96 lines ...
[15,455 / 18,160] 412 / 962 tests; Testing //staging/src/k8s.io/apiserver/pkg/admission/plugin/webhook/config:go_default_test [51s (2 actions)] ... (234 actions, 233 running)
[16,098 / 18,366] 491 / 962 tests; Testing //staging/src/k8s.io/apiserver/pkg/admission/plugin/webhook/config:go_default_test [83s (2 actions)] ... (242 actions, 241 running)
[17,543 / 18,570] 596 / 962 tests; Testing //staging/src/k8s.io/client-go/rest:go_default_test [78s (2 actions)] ... (401 actions, 400 running)
[18,140 / 18,722] 692 / 962 tests; Testing //staging/src/k8s.io/client-go/rest:go_default_test [120s (2 actions)] ... (300 actions running)
[18,582 / 18,812] 758 / 962 tests; GoLink staging/src/k8s.io/kubectl/pkg/cmd/kustomize/go_default_test_/go_default_test; 132s remote ... (218 actions running)
[18,964 / 19,076] 876 / 962 tests; GoLink pkg/volume/util/operationexecutor/go_default_test_/go_default_test; 136s remote ... (109 actions, 108 running)
FAIL: //pkg/kubelet:go_default_test (run 2 of 2) (see /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/run_2_of_2/test.log)
INFO: From Testing //pkg/kubelet:go_default_test (run 2 of 2):
==================== Test output for //pkg/kubelet:go_default_test (run 2 of 2):
E1026 23:25:56.382624      24 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 23:25:56.384085      24 plugin_manager.go:114] Starting Kubelet Plugin Manager
E1026 23:25:56.409012      24 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 23:25:56.409893      24 plugin_manager.go:114] Starting Kubelet Plugin Manager
E1026 23:25:56.417461      24 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 23:25:56.419482      24 plugin_manager.go:114] Starting Kubelet Plugin Manager
E1026 23:25:57.443225      24 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": Get "http://127.0.0.1:44155/api/v1/nodes/127.0.0.1?resourceVersion=0&timeout=1s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E1026 23:25:58.444250      24 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": Get "http://127.0.0.1:44155/api/v1/nodes/127.0.0.1?timeout=1s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E1026 23:25:59.445426      24 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": Get "http://127.0.0.1:44155/api/v1/nodes/127.0.0.1?timeout=1s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E1026 23:26:00.446621      24 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": Get "http://127.0.0.1:44155/api/v1/nodes/127.0.0.1?timeout=1s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E1026 23:26:01.447802      24 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": Get "http://127.0.0.1:44155/api/v1/nodes/127.0.0.1?timeout=1s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
==================
WARNING: DATA RACE
Read at 0x00c0000a6f43 by goroutine 97:
  testing.(*common).logDepth()
      GOROOT/src/testing/testing.go:736 +0xa9
  testing.(*common).log()
... skipping 40 lines ...
      vendor/github.com/cilium/ebpf/syscalls.go:188 +0x2b0
  runtime.doInit()
      GOROOT/src/runtime/proc.go:5625 +0x89
  k8s.io/kubernetes/vendor/github.com/cilium/ebpf/internal/btf.init()
      vendor/github.com/cilium/ebpf/internal/btf/btf.go:656 +0x18f
==================
E1026 23:26:01.454316      24 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 23:26:01.455023      24 plugin_manager.go:114] Starting Kubelet Plugin Manager
I1026 23:26:01.484354      24 setters.go:576] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-10-26 23:25:31.484108314 +0000 UTC m=-24.433414882 LastTransitionTime:2020-10-26 23:25:31.484108314 +0000 UTC m=-24.433414882 Reason:KubeletNotReady Message:container runtime is down}
E1026 23:26:01.493314      24 kubelet.go:2155] Container runtime sanity check failed: injected runtime status error
E1026 23:26:01.502829      24 kubelet.go:2159] Container runtime status is nil
E1026 23:26:01.512011      24 kubelet.go:2168] Container runtime network not ready: <nil>
E1026 23:26:01.512221      24 kubelet.go:2179] Container runtime not ready: <nil>
E1026 23:26:01.521751      24 kubelet.go:2179] Container runtime not ready: RuntimeReady=false reason: message:
E1026 23:26:01.540298      24 kubelet.go:2168] Container runtime network not ready: NetworkReady=false reason: message:
I1026 23:26:01.540968      24 setters.go:576] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-10-26 23:26:01.493294682 +0000 UTC m=+5.575771462 LastTransitionTime:2020-10-26 23:26:01.493294682 +0000 UTC m=+5.575771462 Reason:KubeletNotReady Message:runtime network not ready: NetworkReady=false reason: message:}
E1026 23:26:01.552885      24 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E1026 23:26:01.553010      24 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E1026 23:26:01.553151      24 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E1026 23:26:01.553224      24 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E1026 23:26:01.553407      24 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E1026 23:26:01.566926      24 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 23:26:01.568467      24 plugin_manager.go:114] Starting Kubelet Plugin Manager
I1026 23:26:01.598343      24 kubelet_network.go:77] Setting Pod CIDR:  -> 10.0.0.0/24,2000::/10
I1026 23:26:01.736885      24 kubelet_node_status.go:70] Attempting to register node 127.0.0.1
I1026 23:26:01.737615      24 kubelet_node_status.go:108] Node 127.0.0.1 was previously registered
I1026 23:26:01.737747      24 kubelet_node_status.go:73] Successfully registered node 127.0.0.1
I1026 23:26:01.748968      24 kubelet_node_status.go:108] Node 127.0.0.1 was previously registered
I1026 23:26:01.753528      24 kubelet_node_status.go:108] Node 127.0.0.1 was previously registered
I1026 23:26:01.754177      24 kubelet_node_status.go:246] Controller attach-detach setting changed to false; updating existing Node
I1026 23:26:01.761909      24 kubelet_node_status.go:108] Node 127.0.0.1 was previously registered
I1026 23:26:01.762484      24 kubelet_node_status.go:249] Controller attach-detach setting changed to true; updating existing Node
E1026 23:26:01.766901      24 kubelet_node_status.go:92] Unable to register node "127.0.0.1" with API server: 
E1026 23:26:01.768882      24 kubelet_node_status.go:98] Unable to register node "127.0.0.1" with API server: error getting existing node: 
I1026 23:26:01.770741      24 kubelet_node_status.go:108] Node 127.0.0.1 was previously registered
I1026 23:26:01.770805      24 kubelet_node_status.go:246] Controller attach-detach setting changed to false; updating existing Node
E1026 23:26:01.771910      24 kubelet_node_status.go:119] Unable to reconcile node "127.0.0.1" with API server: error updating node: failed to patch status "{\"metadata\":{\"annotations\":null}}" for node "127.0.0.1": 
E1026 23:26:01.778936      24 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 23:26:01.781784      24 plugin_manager.go:114] Starting Kubelet Plugin Manager
I1026 23:26:01.796745      24 kubelet_node_status.go:167] Removing now unsupported huge page resource named: hugepages-2Mi
I1026 23:26:01.801160      24 kubelet_node_status.go:181] Zero out resource test.com/resource1 capacity in existing node.
I1026 23:26:01.801612      24 kubelet_node_status.go:181] Zero out resource test.com/resource2 capacity in existing node.
I1026 23:26:01.907165      24 kubelet_node_status.go:70] Attempting to register node 127.0.0.1
I1026 23:26:01.907513      24 kubelet_node_status.go:73] Successfully registered node 127.0.0.1
... skipping 12 lines ...
I1026 23:26:02.139727      24 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I1026 23:26:02.140499      24 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I1026 23:26:02.141071      24 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
E1026 23:26:02.143353      24 kubelet.go:1883] Update channel is closed. Exiting the sync loop.
I1026 23:26:02.143494      24 kubelet.go:1803] Starting kubelet main sync loop.
E1026 23:26:02.143597      24 kubelet.go:1883] Update channel is closed. Exiting the sync loop.
W1026 23:26:02.176332      24 predicate.go:79] Failed to admit pod failedpod_foo(4) - Update plugin resources failed due to Allocation failed, which is unexpected.
E1026 23:26:02.181874      24 runtime.go:209] invalid container ID: ""
E1026 23:26:02.182040      24 runtime.go:209] invalid container ID: ""
I1026 23:26:02.189749      24 kubelet.go:1621] Trying to delete pod foo_ns 11111111
W1026 23:26:02.189873      24 kubelet.go:1625] Deleted mirror pod "foo_ns(11111111)" because it is outdated
E1026 23:26:02.259534      24 kubelet_volumes.go:154] orphaned pod "pod1uid" found, but volume paths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
W1026 23:26:02.263126      24 kubelet_getters.go:300] Path "/tmp/kubelet_test.666905481/pods/pod1uid/volumes" does not exist
... skipping 4 lines ...
E1026 23:26:02.266627      24 kubelet_volumes.go:154] orphaned pod "pod1uid" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
W1026 23:26:02.273872      24 kubelet_getters.go:300] Path "/tmp/kubelet_test.435163341/pods/pod1uid/volumes" does not exist
W1026 23:26:02.274534      24 kubelet_getters.go:300] Path "/tmp/kubelet_test.435163341/pods/pod1uid/volumes" does not exist
W1026 23:26:02.301299      24 kubelet_getters.go:300] Path "/tmp/kubelet_test.059525653/pods/poduid/volumes" does not exist
I1026 23:26:02.303996      24 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 23:26:02.304365      24 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 23:26:02.309437      24 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I1026 23:26:02.507035      24 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") 
I1026 23:26:02.507533      24 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") from node "127.0.0.1" 
I1026 23:26:02.508344      24 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") 
I1026 23:26:02.508656      24 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") from node "127.0.0.1" 
I1026 23:26:02.509007      24 reconciler.go:157] Reconciler: start to sync state
I1026 23:26:02.610778      24 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
... skipping 2 lines ...
I1026 23:26:02.612039      24 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I1026 23:26:02.612499      24 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I1026 23:26:02.612683      24 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") device mount path ""
I1026 23:26:02.905107      24 volume_manager.go:277] Shutting down Kubelet Volume Manager
I1026 23:26:02.907604      24 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 23:26:02.907613      24 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 23:26:02.911170      24 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I1026 23:26:03.108862      24 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") 
I1026 23:26:03.109076      24 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") from node "127.0.0.1" 
I1026 23:26:03.109151      24 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") 
I1026 23:26:03.109468      24 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") 
I1026 23:26:03.109501      24 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") from node "127.0.0.1" 
I1026 23:26:03.109531      24 reconciler.go:157] Reconciler: start to sync state
... skipping 7 lines ...
I1026 23:26:03.212428      24 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") DevicePath "/dev/vdb-test"
I1026 23:26:03.212559      24 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") DevicePath "/dev/sdb"
I1026 23:26:03.212649      24 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") device mount path ""
I1026 23:26:03.508630      24 volume_manager.go:277] Shutting down Kubelet Volume Manager
I1026 23:26:03.512268      24 desired_state_of_world_populator.go:142] Desired state populator starts to run
I1026 23:26:03.512244      24 volume_manager.go:266] Starting Kubelet Volume Manager
E1026 23:26:03.514833      24 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I1026 23:26:03.713459      24 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I1026 23:26:03.713567      24 reconciler.go:157] Reconciler: start to sync state
I1026 23:26:03.713626      24 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I1026 23:26:03.814381      24 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I1026 23:26:03.814530      24 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I1026 23:26:03.814651      24 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I1026 23:26:04.113004      24 volume_manager.go:277] Shutting down Kubelet Volume Manager
I1026 23:26:04.115454      24 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 23:26:04.115522      24 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 23:26:04.119264      24 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I1026 23:26:04.317063      24 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I1026 23:26:04.317226      24 reconciler.go:157] Reconciler: start to sync state
I1026 23:26:04.317245      24 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I1026 23:26:04.418066      24 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I1026 23:26:04.418202      24 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I1026 23:26:04.418291      24 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I1026 23:26:04.819232      24 reconciler.go:196] operationExecutor.UnmountVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "12345678" (UID: "12345678") 
I1026 23:26:04.819566      24 operation_generator.go:786] UnmountVolume.TearDown succeeded for volume "fake/fake-device" (OuterVolumeSpecName: "vol1") pod "12345678" (UID: "12345678"). InnerVolumeSpecName "vol1". PluginName "fake", VolumeGidValue ""
I1026 23:26:04.920000      24 reconciler.go:312] operationExecutor.UnmountDevice started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I1026 23:26:04.920222      24 operation_generator.go:880] UnmountDevice succeeded for volume "vol1" %!(EXTRA string=UnmountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" )
I1026 23:26:05.020651      24 reconciler.go:333] operationExecutor.DetachVolume started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I1026 23:26:05.020772      24 operation_generator.go:470] DetachVolume.Detach succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
E1026 23:26:05.045993      24 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I1026 23:26:05.117524      24 volume_manager.go:277] Shutting down Kubelet Volume Manager
I1026 23:26:05.121041      24 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 23:26:05.121197      24 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 23:26:05.124230      24 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: no reaction implemented for {{ list storage.k8s.io/v1, Resource=csidrivers } storage.k8s.io/v1, Kind=CSIDriver  { }}
I1026 23:26:05.322472      24 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I1026 23:26:05.322600      24 reconciler.go:157] Reconciler: start to sync state
I1026 23:26:05.323192      24 operation_generator.go:1346] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I1026 23:26:05.423619      24 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I1026 23:26:05.423783      24 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I1026 23:26:05.423869      24 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I1026 23:26:05.721955      24 volume_manager.go:277] Shutting down Kubelet Volume Manager
I1026 23:26:05.725812      24 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 23:26:05.725981      24 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 23:26:05.728646      24 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: no reaction implemented for {{ list storage.k8s.io/v1, Resource=csidrivers } storage.k8s.io/v1, Kind=CSIDriver  { }}
I1026 23:26:05.927082      24 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I1026 23:26:05.927251      24 reconciler.go:157] Reconciler: start to sync state
I1026 23:26:05.927529      24 operation_generator.go:1346] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I1026 23:26:06.028421      24 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I1026 23:26:06.028605      24 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I1026 23:26:06.028709      24 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I1026 23:26:06.329145      24 reconciler.go:196] operationExecutor.UnmountVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "12345678" (UID: "12345678") 
I1026 23:26:06.329488      24 operation_generator.go:786] UnmountVolume.TearDown succeeded for volume "fake/fake-device" (OuterVolumeSpecName: "vol1") pod "12345678" (UID: "12345678"). InnerVolumeSpecName "vol1". PluginName "fake", VolumeGidValue ""
E1026 23:26:06.384724      24 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
E1026 23:26:06.410804      24 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
E1026 23:26:06.419065      24 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 23:26:06.429954      24 reconciler.go:312] operationExecutor.UnmountDevice started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I1026 23:26:06.430094      24 operation_generator.go:880] UnmountDevice succeeded for volume "vol1" %!(EXTRA string=UnmountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" )
I1026 23:26:06.530722      24 reconciler.go:319] Volume detached for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" DevicePath "/dev/sdb"
I1026 23:26:06.577606      24 volume_manager.go:277] Shutting down Kubelet Volume Manager
W1026 23:26:06.579715      24 pod_container_deletor.go:79] Container "abc" not found in pod's containers
I1026 23:26:06.766916      24 runonce.go:88] Waiting for 1 pods
I1026 23:26:06.767249      24 runonce.go:123] pod "foo_new(12345678)" containers running
I1026 23:26:06.767793      24 runonce.go:102] started pod "foo_new(12345678)"
I1026 23:26:06.767893      24 runonce.go:108] 1 pods started
FAIL
================================================================================
[19,182 / 19,192] 956 / 962 tests, 1 failed; Testing //pkg/kubelet/volumemanager/reconciler:go_default_test [90s (2 actions)] ... (10 actions running)
INFO: Elapsed time: 541.109s, Critical Path: 465.09s
INFO: 17269 processes: 15342 remote cache hit, 1927 remote.
INFO: Build completed, 1 test FAILED, 19194 total actions
//cluster:common_test                                                    PASSED in 3.4s
  Stats over 2 runs: max = 3.4s, min = 3.2s, avg = 3.3s, dev = 0.1s
//cluster:kube-util_test                                                 PASSED in 3.1s
  Stats over 2 runs: max = 3.1s, min = 2.7s, avg = 2.9s, dev = 0.2s
//cluster/gce/cos:go_default_test                                        PASSED in 16.4s
  Stats over 2 runs: max = 16.4s, min = 16.4s, avg = 16.4s, dev = 0.0s
... skipping 1910 lines ...
//third_party/forked/golang/expansion:go_default_test                    PASSED in 9.0s
  Stats over 2 runs: max = 9.0s, min = 8.8s, avg = 8.9s, dev = 0.1s
//third_party/forked/golang/reflect:go_default_test                      PASSED in 6.3s
  Stats over 2 runs: max = 6.3s, min = 6.1s, avg = 6.2s, dev = 0.1s
//third_party/forked/gonum/graph/simple:go_default_test                  PASSED in 7.9s
  Stats over 2 runs: max = 7.9s, min = 7.9s, avg = 7.9s, dev = 0.0s
//pkg/kubelet:go_default_test                                            FAILED in 1 out of 2 in 17.7s
  Stats over 2 runs: max = 17.7s, min = 17.4s, avg = 17.6s, dev = 0.2s
  /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/run_2_of_2/test.log

Executed 962 out of 962 tests: 961 tests pass and 1 fails remotely.
There were tests whose specified size is too big. Use the --test_verbose_timeout_warnings command line option to see which ones these are.
INFO: Build completed, 1 test FAILED, 19194 total actions
+ ../test-infra/hack/coalesce.py
+ exit 3
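
The lone failure is //pkg/kubelet:go_default_test, which failed only on run 2 of 2 (see the FAIL block and test.log path above), so it looks like a flake rather than a deterministic failure. A minimal sketch for reproducing it locally, assuming a Kubernetes checkout of this revision with Bazel configured; it reuses the --config=unit flag from the CI invocation at the top of the log, repeats the test to give the flake a chance to surface (it may not reproduce), and prints output only for failing runs:

  # Run the failing target twice, mirroring CI's 2-of-2 runs; --config=unit is taken from the logged command.
  bazel test --config=unit --runs_per_test=2 --test_output=errors //pkg/kubelet:go_default_test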