Result: FAILURE
Tests: 0 failed / 1920 succeeded
Started: 2020-10-26 20:14
Elapsed: 9m33s
Revision: master

Error lines from build-log.txt

+ bazel test --config=unit --config=remote --remote_instance_name=projects/k8s-prow-builds/instances/default_instance //... //hack:verify-all -- -//build/... -//vendor/...
$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
INFO: Invocation ID: cb330bfd-6a44-48f1-9f5e-512851f585eb
WARNING: Download from https://storage.googleapis.com/k8s-bazel-cache/https://github.com/bazelbuild/rules_docker/releases/download/v0.14.4/rules_docker-v0.14.4.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
Loading:  (1 packages loaded)
Loading: 1 packages loaded
Loading: 149 packages loaded
    currently loading: staging/src/k8s.io/api ... (2 packages)
Loading: 758 packages loaded
    currently loading: staging/src/k8s.io/apimachinery/pkg/util/proxy ... (2 packages)
... skipping 3 lines ...
    currently loading: vendor/k8s.io/client-go/tools/remotecommand ... (4 packages)
Analyzing: 962 targets (4557 packages loaded, 0 targets configured)
DEBUG: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/bazel_toolchains/rules/rbe_repo.bzl:491:5: Bazel 2.2.0 is used in rbe_default.
Analyzing: 962 targets (4577 packages loaded, 16104 targets configured)
Analyzing: 962 targets (4577 packages loaded, 29136 targets configured)
Analyzing: 962 targets (4597 packages loaded, 36586 targets configured)
WARNING: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/remote_java_tools_linux/BUILD:671:1: in hdrs attribute of cc_library rule @remote_java_tools_linux//:combiners: Artifact 'external/remote_java_tools_linux/java_tools/src/tools/singlejar/zip_headers.h' is duplicated (through '@remote_java_tools_linux//:transient_bytes' and '@remote_java_tools_linux//:zip_headers'). Since this rule was created by the macro 'cc_library', the error might have been caused by the macro implementation
Analyzing: 962 targets (4599 packages loaded, 36871 targets configured)
Analyzing: 962 targets (4599 packages loaded, 36871 targets configured)
Analyzing: 962 targets (4599 packages loaded, 36871 targets configured)
DEBUG: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/bazel_gazelle/internal/go_repository.bzl:184:13: org_golang_x_tools: gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go:2:43: expected 'package', found 'EOF'
gazelle: found packages aliases (aliases.go) and complexnums (complexnums.go) in /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/internal/gccgoimporter/testdata
gazelle: found packages p (issue15920.go) and issue25301 (issue25301.go) in /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/internal/gcimporter/testdata
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go:1:34: expected 'package', found 'EOF'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go:1:16: expected ';', found '.'
gazelle: finding module path for import domain.name/importdecl: exit status 1: go: finding module for package domain.name/importdecl
cannot find module providing package domain.name/importdecl: module domain.name/importdecl: reading https://proxy.golang.org/domain.name/importdecl/@v/list: 410 Gone
	server response: not found: domain.name/importdecl@latest: unrecognized import path "domain.name/importdecl": https fetch: Get "https://domain.name/importdecl?go-get=1": dial tcp: lookup domain.name on 8.8.8.8:53: no such host
gazelle: finding module path for import old.com/one: exit status 1: go: finding module for package old.com/one
cannot find module providing package old.com/one: module old.com/one: reading https://proxy.golang.org/old.com/one/@v/list: 410 Gone
	server response: not found: old.com/one@latest: unrecognized import path "old.com/one": https fetch: Get "http://www.old.com/one?go-get=1": redirected from secure URL https://old.com/one?go-get=1 to insecure URL http://www.old.com/one?go-get=1
... skipping 106 lines ...
[14,818 / 18,050] 288 / 962 tests; Testing //hack:verify-boilerplate [68s (2 actions)] ... (457 actions running)
[15,799 / 18,252] 402 / 962 tests; Testing //staging/src/k8s.io/apiserver/pkg/endpoints/request:go_default_test (run 2 of 2); 60s remote ... (354 actions running)
[16,620 / 18,468] 538 / 962 tests; Testing //staging/src/k8s.io/client-go/rest:go_default_test [67s (2 actions)] ... (357 actions, 355 running)
[17,546 / 18,588] 616 / 962 tests; Testing //staging/src/k8s.io/client-go/rest:go_default_test [108s (2 actions)] ... (319 actions, 318 running)
[18,098 / 18,876] 721 / 962 tests; Testing //cmd/kubeadm/app/phases/certs:go_default_test [74s (2 actions)] ... (247 actions, 245 running)
[18,845 / 19,044] 831 / 962 tests; GoLink cmd/kubelet/app/options/go_default_test_/go_default_test; 99s remote ... (186 actions, 185 running)
FAIL: //pkg/kubelet:go_default_test (run 1 of 2) (see /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/run_1_of_2/test.log)
INFO: From Testing //pkg/kubelet:go_default_test (run 1 of 2):
==================== Test output for //pkg/kubelet:go_default_test (run 1 of 2):
E1026 20:22:19.332374      23 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 20:22:19.333523      23 plugin_manager.go:114] Starting Kubelet Plugin Manager
I1026 20:22:19.348772      23 plugin_manager.go:114] Starting Kubelet Plugin Manager
E1026 20:22:19.351054      23 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 20:22:19.359450      23 plugin_manager.go:114] Starting Kubelet Plugin Manager
E1026 20:22:19.361732      23 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
E1026 20:22:20.378116      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": Get "http://127.0.0.1:33107/api/v1/nodes/127.0.0.1?resourceVersion=0&timeout=1s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E1026 20:22:21.379127      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": Get "http://127.0.0.1:33107/api/v1/nodes/127.0.0.1?timeout=1s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E1026 20:22:22.380169      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": Get "http://127.0.0.1:33107/api/v1/nodes/127.0.0.1?timeout=1s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E1026 20:22:23.380996      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": Get "http://127.0.0.1:33107/api/v1/nodes/127.0.0.1?timeout=1s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E1026 20:22:24.381922      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": Get "http://127.0.0.1:33107/api/v1/nodes/127.0.0.1?timeout=1s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
==================
WARNING: DATA RACE
Read at 0x00c00082d243 by goroutine 97:
  testing.(*common).logDepth()
      GOROOT/src/testing/testing.go:736 +0xa9
  testing.(*common).log()
... skipping 40 lines ...
      vendor/github.com/cilium/ebpf/syscalls.go:188 +0x2b0
  runtime.doInit()
      GOROOT/src/runtime/proc.go:5625 +0x89
  k8s.io/kubernetes/vendor/github.com/cilium/ebpf/internal/btf.init()
      vendor/github.com/cilium/ebpf/internal/btf/btf.go:656 +0x18f
==================
E1026 20:22:24.385356      23 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 20:22:24.385834      23 plugin_manager.go:114] Starting Kubelet Plugin Manager
I1026 20:22:24.401336      23 setters.go:576] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-10-26 20:21:54.401138307 +0000 UTC m=-24.565650156 LastTransitionTime:2020-10-26 20:21:54.401138307 +0000 UTC m=-24.565650156 Reason:KubeletNotReady Message:container runtime is down}
E1026 20:22:24.408480      23 kubelet.go:2155] Container runtime sanity check failed: injected runtime status error
E1026 20:22:24.415843      23 kubelet.go:2159] Container runtime status is nil
E1026 20:22:24.422979      23 kubelet.go:2168] Container runtime network not ready: <nil>
E1026 20:22:24.423124      23 kubelet.go:2179] Container runtime not ready: <nil>
E1026 20:22:24.431041      23 kubelet.go:2179] Container runtime not ready: RuntimeReady=false reason: message:
E1026 20:22:24.445638      23 kubelet.go:2168] Container runtime network not ready: NetworkReady=false reason: message:
I1026 20:22:24.446203      23 setters.go:576] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-10-26 20:22:24.408465182 +0000 UTC m=+5.441676682 LastTransitionTime:2020-10-26 20:22:24.408465182 +0000 UTC m=+5.441676682 Reason:KubeletNotReady Message:runtime network not ready: NetworkReady=false reason: message:}
E1026 20:22:24.458675      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E1026 20:22:24.459019      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E1026 20:22:24.459320      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E1026 20:22:24.459441      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E1026 20:22:24.459882      23 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E1026 20:22:24.466907      23 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 20:22:24.468009      23 plugin_manager.go:114] Starting Kubelet Plugin Manager
I1026 20:22:24.491323      23 kubelet_network.go:77] Setting Pod CIDR:  -> 10.0.0.0/24,2000::/10
I1026 20:22:24.609671      23 kubelet_node_status.go:70] Attempting to register node 127.0.0.1
I1026 20:22:24.610158      23 kubelet_node_status.go:108] Node 127.0.0.1 was previously registered
I1026 20:22:24.610231      23 kubelet_node_status.go:73] Successfully registered node 127.0.0.1
I1026 20:22:24.613361      23 kubelet_node_status.go:108] Node 127.0.0.1 was previously registered
I1026 20:22:24.614628      23 kubelet_node_status.go:108] Node 127.0.0.1 was previously registered
I1026 20:22:24.614678      23 kubelet_node_status.go:246] Controller attach-detach setting changed to false; updating existing Node
I1026 20:22:24.617798      23 kubelet_node_status.go:108] Node 127.0.0.1 was previously registered
I1026 20:22:24.617840      23 kubelet_node_status.go:249] Controller attach-detach setting changed to true; updating existing Node
E1026 20:22:24.620775      23 kubelet_node_status.go:92] Unable to register node "127.0.0.1" with API server: 
E1026 20:22:24.621946      23 kubelet_node_status.go:98] Unable to register node "127.0.0.1" with API server: error getting existing node: 
I1026 20:22:24.623165      23 kubelet_node_status.go:108] Node 127.0.0.1 was previously registered
I1026 20:22:24.623216      23 kubelet_node_status.go:246] Controller attach-detach setting changed to false; updating existing Node
E1026 20:22:24.624115      23 kubelet_node_status.go:119] Unable to reconcile node "127.0.0.1" with API server: error updating node: failed to patch status "{\"metadata\":{\"annotations\":null}}" for node "127.0.0.1": 
E1026 20:22:24.637016      23 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 20:22:24.637807      23 plugin_manager.go:114] Starting Kubelet Plugin Manager
I1026 20:22:24.650447      23 kubelet_node_status.go:167] Removing now unsupported huge page resource named: hugepages-2Mi
I1026 20:22:24.653282      23 kubelet_node_status.go:181] Zero out resource test.com/resource1 capacity in existing node.
I1026 20:22:24.653426      23 kubelet_node_status.go:181] Zero out resource test.com/resource2 capacity in existing node.
I1026 20:22:24.758410      23 kubelet_node_status.go:70] Attempting to register node 127.0.0.1
I1026 20:22:24.758717      23 kubelet_node_status.go:73] Successfully registered node 127.0.0.1
... skipping 12 lines ...
I1026 20:22:24.926016      23 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I1026 20:22:24.926698      23 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I1026 20:22:24.927324      23 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
E1026 20:22:24.928982      23 kubelet.go:1883] Update channel is closed. Exiting the sync loop.
I1026 20:22:24.929040      23 kubelet.go:1803] Starting kubelet main sync loop.
E1026 20:22:24.929124      23 kubelet.go:1883] Update channel is closed. Exiting the sync loop.
W1026 20:22:24.943963      23 predicate.go:79] Failed to admit pod failedpod_foo(4) - Update plugin resources failed due to Allocation failed, which is unexpected.
E1026 20:22:24.947314      23 runtime.go:209] invalid container ID: ""
E1026 20:22:24.947445      23 runtime.go:209] invalid container ID: ""
I1026 20:22:24.953430      23 kubelet.go:1621] Trying to delete pod foo_ns 11111111
W1026 20:22:24.953538      23 kubelet.go:1625] Deleted mirror pod "foo_ns(11111111)" because it is outdated
W1026 20:22:24.993189      23 kubelet_getters.go:300] Path "/tmp/kubelet_test.014442265/pods/pod1uid/volumes" does not exist
W1026 20:22:24.993307      23 kubelet_getters.go:300] Path "/tmp/kubelet_test.014442265/pods/pod1uid/volumes" does not exist
... skipping 4 lines ...
W1026 20:22:25.007916      23 kubelet_getters.go:300] Path "/tmp/kubelet_test.420309341/pods/pod1uid/volumes" does not exist
W1026 20:22:25.008036      23 kubelet_getters.go:300] Path "/tmp/kubelet_test.420309341/pods/pod1uid/volumes" does not exist
E1026 20:22:25.008357      23 kubelet_volumes.go:154] orphaned pod "pod1uid" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
W1026 20:22:25.017895      23 kubelet_getters.go:300] Path "/tmp/kubelet_test.569646267/pods/poduid/volumes" does not exist
I1026 20:22:25.026881      23 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 20:22:25.026886      23 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 20:22:25.030484      23 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I1026 20:22:25.228860      23 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") 
I1026 20:22:25.229650      23 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") from node "127.0.0.1" 
I1026 20:22:25.229654      23 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") 
I1026 20:22:25.230362      23 reconciler.go:157] Reconciler: start to sync state
I1026 20:22:25.229788      23 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") from node "127.0.0.1" 
I1026 20:22:25.331721      23 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
... skipping 2 lines ...
I1026 20:22:25.332486      23 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I1026 20:22:25.333013      23 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") device mount path ""
I1026 20:22:25.333011      23 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") device mount path ""
I1026 20:22:25.627332      23 volume_manager.go:277] Shutting down Kubelet Volume Manager
I1026 20:22:25.629827      23 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 20:22:25.629890      23 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 20:22:25.632741      23 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I1026 20:22:25.831310      23 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") 
I1026 20:22:25.831446      23 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") from node "127.0.0.1" 
I1026 20:22:25.832921      23 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") 
I1026 20:22:25.833406      23 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") 
I1026 20:22:25.833913      23 reconciler.go:157] Reconciler: start to sync state
I1026 20:22:25.833795      23 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") from node "127.0.0.1" 
... skipping 7 lines ...
I1026 20:22:25.936024      23 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") DevicePath "/dev/sdb"
I1026 20:22:25.937901      23 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") device mount path ""
I1026 20:22:25.938250      23 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") device mount path ""
I1026 20:22:26.230802      23 volume_manager.go:277] Shutting down Kubelet Volume Manager
I1026 20:22:26.234402      23 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 20:22:26.234873      23 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 20:22:26.238769      23 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I1026 20:22:26.435520      23 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I1026 20:22:26.436064      23 reconciler.go:157] Reconciler: start to sync state
I1026 20:22:26.436377      23 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I1026 20:22:26.537082      23 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I1026 20:22:26.537471      23 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I1026 20:22:26.537733      23 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I1026 20:22:26.834832      23 volume_manager.go:277] Shutting down Kubelet Volume Manager
I1026 20:22:26.837581      23 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 20:22:26.837581      23 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 20:22:26.840470      23 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I1026 20:22:27.038986      23 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I1026 20:22:27.039086      23 reconciler.go:157] Reconciler: start to sync state
I1026 20:22:27.039334      23 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I1026 20:22:27.139947      23 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I1026 20:22:27.140073      23 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I1026 20:22:27.140192      23 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
... skipping 3 lines ...
I1026 20:22:27.541236      23 operation_generator.go:880] UnmountDevice succeeded for volume "vol1" %!(EXTRA string=UnmountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" )
I1026 20:22:27.641807      23 reconciler.go:333] operationExecutor.DetachVolume started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I1026 20:22:27.641851      23 operation_generator.go:470] DetachVolume.Detach succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I1026 20:22:27.688846      23 volume_manager.go:277] Shutting down Kubelet Volume Manager
I1026 20:22:27.691046      23 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 20:22:27.691188      23 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 20:22:27.693722      23 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: no reaction implemented for {{ list storage.k8s.io/v1, Resource=csidrivers } storage.k8s.io/v1, Kind=CSIDriver  { }}
I1026 20:22:27.892041      23 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I1026 20:22:27.892191      23 operation_generator.go:1346] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I1026 20:22:27.892215      23 reconciler.go:157] Reconciler: start to sync state
I1026 20:22:27.993028      23 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I1026 20:22:27.993174      23 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I1026 20:22:27.993265      23 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I1026 20:22:28.291821      23 volume_manager.go:277] Shutting down Kubelet Volume Manager
I1026 20:22:28.294414      23 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 20:22:28.294415      23 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 20:22:28.296757      23 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: no reaction implemented for {{ list storage.k8s.io/v1, Resource=csidrivers } storage.k8s.io/v1, Kind=CSIDriver  { }}
I1026 20:22:28.495243      23 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I1026 20:22:28.495347      23 reconciler.go:157] Reconciler: start to sync state
I1026 20:22:28.495688      23 operation_generator.go:1346] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I1026 20:22:28.596099      23 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I1026 20:22:28.596255      23 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I1026 20:22:28.596351      23 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I1026 20:22:28.997218      23 reconciler.go:196] operationExecutor.UnmountVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "12345678" (UID: "12345678") 
I1026 20:22:28.997400      23 operation_generator.go:786] UnmountVolume.TearDown succeeded for volume "fake/fake-device" (OuterVolumeSpecName: "vol1") pod "12345678" (UID: "12345678"). InnerVolumeSpecName "vol1". PluginName "fake", VolumeGidValue ""
I1026 20:22:29.097737      23 reconciler.go:312] operationExecutor.UnmountDevice started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I1026 20:22:29.097961      23 operation_generator.go:880] UnmountDevice succeeded for volume "vol1" %!(EXTRA string=UnmountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" )
I1026 20:22:29.198083      23 reconciler.go:319] Volume detached for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" DevicePath "/dev/sdb"
I1026 20:22:29.295662      23 volume_manager.go:277] Shutting down Kubelet Volume Manager
W1026 20:22:29.297113      23 pod_container_deletor.go:79] Container "abc" not found in pod's containers
E1026 20:22:29.332959      23 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
E1026 20:22:29.351606      23 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
E1026 20:22:29.362107      23 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 20:22:29.467233      23 runonce.go:88] Waiting for 1 pods
I1026 20:22:29.467319      23 runonce.go:123] pod "foo_new(12345678)" containers running
I1026 20:22:29.467862      23 runonce.go:102] started pod "foo_new(12345678)"
I1026 20:22:29.467950      23 runonce.go:108] 1 pods started
FAIL
================================================================================
[19,177 / 19,192] 955 / 962 tests, 1 failed; Testing //pkg/volume/csi:go_default_test [133s (2 actions)] ... (13 actions running)
[19,191 / 19,194] 960 / 962 tests, 1 failed; Testing //cmd/kubeadm/app/phases/upgrade:go_default_test (run 1 of 2); 133s remote ... (3 actions, 1 running)
INFO: Elapsed time: 569.076s, Critical Path: 477.12s
INFO: 17269 processes: 15342 remote cache hit, 1927 remote.
INFO: Build completed, 1 test FAILED, 19194 total actions
//cluster:common_test                                                    PASSED in 7.8s
  Stats over 2 runs: max = 7.8s, min = 7.8s, avg = 7.8s, dev = 0.0s
//cluster:kube-util_test                                                 PASSED in 3.6s
  Stats over 2 runs: max = 3.6s, min = 3.6s, avg = 3.6s, dev = 0.0s
//cluster/gce/cos:go_default_test                                        PASSED in 20.9s
  Stats over 2 runs: max = 20.9s, min = 19.9s, avg = 20.4s, dev = 0.5s
... skipping 1910 lines ...
//third_party/forked/golang/expansion:go_default_test                    PASSED in 14.5s
  Stats over 2 runs: max = 14.5s, min = 14.3s, avg = 14.4s, dev = 0.1s
//third_party/forked/golang/reflect:go_default_test                      PASSED in 7.3s
  Stats over 2 runs: max = 7.3s, min = 6.5s, avg = 6.9s, dev = 0.4s
//third_party/forked/gonum/graph/simple:go_default_test                  PASSED in 7.7s
  Stats over 2 runs: max = 7.7s, min = 7.6s, avg = 7.7s, dev = 0.0s
//pkg/kubelet:go_default_test                                            FAILED in 1 out of 2 in 16.5s
  Stats over 2 runs: max = 16.5s, min = 15.8s, avg = 16.2s, dev = 0.3s
  /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/run_1_of_2/test.log

Executed 962 out of 962 tests: 961 tests pass and 1 fails remotely.
There were tests whose specified size is too big. Use the --test_verbose_timeout_warnings command line option to see which ones these are.
INFO: Build completed, 1 test FAILED, 19194 total actions
+ ../test-infra/hack/coalesce.py
+ exit 3