Result: FAILURE
Tests: 0 failed / 1920 succeeded
Started: 2020-10-26 07:01
Elapsed: 9m30s
Revision: master

Error lines from build-log.txt

+ bazel test --config=unit --config=remote --remote_instance_name=projects/k8s-prow-builds/instances/default_instance //... //hack:verify-all -- -//build/... -//vendor/...
$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
INFO: Invocation ID: 42fc28d0-9965-4fb6-90e6-f2f0a3b7dc0f
WARNING: Download from https://storage.googleapis.com/k8s-bazel-cache/https://github.com/bazelbuild/rules_docker/releases/download/v0.14.4/rules_docker-v0.14.4.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
Loading:  (1 packages loaded)
Loading: 1 packages loaded
Loading: 350 packages loaded
Loading: 946 packages loaded
    currently loading:  ... (4 packages)
Loading: 1568 packages loaded
... skipping 3 lines ...
Loading: 4557 packages loaded
Analyzing: 962 targets (4557 packages loaded, 0 targets configured)
DEBUG: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/bazel_toolchains/rules/rbe_repo.bzl:491:5: Bazel 2.2.0 is used in rbe_default.
Analyzing: 962 targets (4577 packages loaded, 13782 targets configured)
Analyzing: 962 targets (4577 packages loaded, 29130 targets configured)
Analyzing: 962 targets (4597 packages loaded, 36609 targets configured)
WARNING: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/remote_java_tools_linux/BUILD:671:1: in hdrs attribute of cc_library rule @remote_java_tools_linux//:combiners: Artifact 'external/remote_java_tools_linux/java_tools/src/tools/singlejar/zip_headers.h' is duplicated (through '@remote_java_tools_linux//:transient_bytes' and '@remote_java_tools_linux//:zip_headers'). Since this rule was created by the macro 'cc_library', the error might have been caused by the macro implementation
Analyzing: 962 targets (4599 packages loaded, 36865 targets configured)
Analyzing: 962 targets (4599 packages loaded, 36865 targets configured)
Analyzing: 962 targets (4599 packages loaded, 36865 targets configured)
DEBUG: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/bazel_gazelle/internal/go_repository.bzl:184:13: org_golang_x_tools: gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go:2:43: expected 'package', found 'EOF'
gazelle: found packages issue30628 (issue30628.go) and issue31540 (issue31540.go) in /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/internal/gccgoimporter/testdata
gazelle: found packages issue25301 (issue25301.go) and a (a.go) in /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/internal/gcimporter/testdata
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go:1:34: expected 'package', found 'EOF'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go:1:16: expected ';', found '.'
gazelle: finding module path for import domain.name/importdecl: exit status 1: go: finding module for package domain.name/importdecl
cannot find module providing package domain.name/importdecl: module domain.name/importdecl: reading https://proxy.golang.org/domain.name/importdecl/@v/list: 410 Gone
	server response: not found: domain.name/importdecl@latest: unrecognized import path "domain.name/importdecl": https fetch: Get "https://domain.name/importdecl?go-get=1": dial tcp: lookup domain.name on 8.8.8.8:53: no such host
gazelle: finding module path for import old.com/one: exit status 1: go: finding module for package old.com/one
cannot find module providing package old.com/one: module old.com/one: reading https://proxy.golang.org/old.com/one/@v/list: 410 Gone
	server response: not found: old.com/one@latest: unrecognized import path "old.com/one": https fetch: Get "http://www.old.com/one?go-get=1": redirected from secure URL https://old.com/one?go-get=1 to insecure URL http://www.old.com/one?go-get=1
gazelle: finding module path for import titanic.biz/bar: exit status 1: go: finding module for package titanic.biz/bar
cannot find module providing package titanic.biz/bar: module titanic.biz/bar: reading https://proxy.golang.org/titanic.biz/bar/@v/list: 410 Gone
	server response: not found: titanic.biz/bar@latest: unrecognized import path "titanic.biz/bar": parsing titanic.biz/bar: XML syntax error on line 1: expected attribute name in element
gazelle: finding module path for import titanic.biz/foo: exit status 1: go: finding module for package titanic.biz/foo
cannot find module providing package titanic.biz/foo: module titanic.biz/foo: reading https://proxy.golang.org/titanic.biz/foo/@v/list: 410 Gone
	server response: not found: titanic.biz/foo@latest: unrecognized import path "titanic.biz/foo": parsing titanic.biz/foo: XML syntax error on line 1: expected attribute name in element
gazelle: finding module path for import fruit.io/pear: exit status 1: go: finding module for package fruit.io/pear
cannot find module providing package fruit.io/pear: module fruit.io/pear: reading https://proxy.golang.org/fruit.io/pear/@v/list: 410 Gone
	server response: not found: fruit.io/pear@latest: unrecognized import path "fruit.io/pear": https fetch: Get "https://fruit.io/pear?go-get=1": x509: certificate is valid for *.gridserver.com, gridserver.com, not fruit.io
gazelle: finding module path for import fruit.io/banana: exit status 1: go: finding module for package fruit.io/banana
cannot find module providing package fruit.io/banana: module fruit.io/banana: reading https://proxy.golang.org/fruit.io/banana/@v/list: 410 Gone
	server response: not found: fruit.io/banana@latest: unrecognized import path "fruit.io/banana": https fetch: Get "https://fruit.io/banana?go-get=1": x509: certificate is valid for *.gridserver.com, gridserver.com, not fruit.io
... skipping 94 lines ...
[15,685 / 18,150] 433 / 962 tests; GoLink staging/src/k8s.io/client-go/plugin/pkg/client/auth/gcp/go_default_test_/go_default_test; 55s remote ... (136 actions, 133 running)
[16,618 / 18,322] 477 / 962 tests; Testing //staging/src/k8s.io/apiserver/pkg/storage/tests:go_default_test [43s (2 actions)] ... (393 actions, 392 running)
[17,218 / 18,546] 538 / 962 tests; GoLink test/e2e/e2e.test_/e2e.test; 80s remote ... (435 actions running)
[18,003 / 18,782] 635 / 962 tests; GoLink test/e2e/e2e.test_/e2e.test; 124s remote ... (385 actions, 383 running)
[18,739 / 18,900] 808 / 962 tests; GoLink staging/src/k8s.io/kubectl/pkg/cmd/rollout/go_default_test_/go_default_test; 99s remote ... (161 actions, 159 running)
[19,148 / 19,194] 937 / 962 tests; Testing //pkg/volume/csi:go_default_test [130s (2 actions)] ... (46 actions running)
FAIL: //pkg/kubelet:go_default_test (run 1 of 2) (see /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/run_1_of_2/test.log)
INFO: From Testing //pkg/kubelet:go_default_test (run 1 of 2):
==================== Test output for //pkg/kubelet:go_default_test (run 1 of 2):
E1026 07:09:30.452783      24 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 07:09:30.454150      24 plugin_manager.go:114] Starting Kubelet Plugin Manager
E1026 07:09:30.471787      24 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 07:09:30.473082      24 plugin_manager.go:114] Starting Kubelet Plugin Manager
I1026 07:09:30.486906      24 plugin_manager.go:114] Starting Kubelet Plugin Manager
E1026 07:09:30.490273      24 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
E1026 07:09:31.510288      24 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": Get "http://127.0.0.1:41217/api/v1/nodes/127.0.0.1?resourceVersion=0&timeout=1s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E1026 07:09:32.511504      24 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": Get "http://127.0.0.1:41217/api/v1/nodes/127.0.0.1?timeout=1s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E1026 07:09:33.512404      24 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": Get "http://127.0.0.1:41217/api/v1/nodes/127.0.0.1?timeout=1s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
E1026 07:09:34.513161      24 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": Get "http://127.0.0.1:41217/api/v1/nodes/127.0.0.1?timeout=1s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
E1026 07:09:35.514450      24 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": Get "http://127.0.0.1:41217/api/v1/nodes/127.0.0.1?timeout=1s": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
==================
WARNING: DATA RACE
Read at 0x00c0009d36c3 by goroutine 97:
  testing.(*common).logDepth()
      GOROOT/src/testing/testing.go:736 +0xa9
  testing.(*common).log()
... skipping 40 lines ...
      vendor/github.com/cilium/ebpf/syscalls.go:188 +0x2b0
  runtime.doInit()
      GOROOT/src/runtime/proc.go:5625 +0x89
  k8s.io/kubernetes/vendor/github.com/cilium/ebpf/internal/btf.init()
      vendor/github.com/cilium/ebpf/internal/btf/btf.go:656 +0x18f
==================
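
The race above is reported inside the testing package's own logging path (testing.(*common).logDepth / log). A minimal sketch of the usual cause of this class of race, assuming the common pattern of a goroutine outliving its test; the test name and timings below are illustrative, not taken from the kubelet test itself:

package example

import (
	"testing"
	"time"
)

// TestBackgroundLogger reproduces the general pattern: the goroutine keeps
// calling t.Logf after the test function returns, so its reads of the
// testing.T internals race with the framework's own bookkeeping.
func TestBackgroundLogger(t *testing.T) {
	go func() {
		for {
			t.Logf("still running") // may execute after the test returns
			time.Sleep(time.Millisecond)
		}
	}()
	// Returning while the goroutine is still live is what `go test -race`
	// flags; the fix is to stop the goroutine (via a channel, context, or
	// sync.WaitGroup) before the test returns.
}
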
E1026 07:09:35.519381      24 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 07:09:35.519978      24 plugin_manager.go:114] Starting Kubelet Plugin Manager
I1026 07:09:35.543214      24 setters.go:576] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-10-26 07:09:05.542943813 +0000 UTC m=-24.457667978 LastTransitionTime:2020-10-26 07:09:05.542943813 +0000 UTC m=-24.457667978 Reason:KubeletNotReady Message:container runtime is down}
E1026 07:09:35.553947      24 kubelet.go:2155] Container runtime sanity check failed: injected runtime status error
E1026 07:09:35.563814      24 kubelet.go:2159] Container runtime status is nil
E1026 07:09:35.572986      24 kubelet.go:2168] Container runtime network not ready: <nil>
E1026 07:09:35.573119      24 kubelet.go:2179] Container runtime not ready: <nil>
E1026 07:09:35.581011      24 kubelet.go:2179] Container runtime not ready: RuntimeReady=false reason: message:
E1026 07:09:35.600437      24 kubelet.go:2168] Container runtime network not ready: NetworkReady=false reason: message:
I1026 07:09:35.601009      24 setters.go:576] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-10-26 07:09:35.553929984 +0000 UTC m=+5.553318193 LastTransitionTime:2020-10-26 07:09:35.553929984 +0000 UTC m=+5.553318193 Reason:KubeletNotReady Message:runtime network not ready: NetworkReady=false reason: message:}
E1026 07:09:35.614837      24 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E1026 07:09:35.615418      24 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E1026 07:09:35.615821      24 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E1026 07:09:35.616178      24 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
E1026 07:09:35.616544      24 kubelet_node_status.go:442] Error updating node status, will retry: error getting node "127.0.0.1": nodes "127.0.0.1" not found
I1026 07:09:35.626680      24 plugin_manager.go:114] Starting Kubelet Plugin Manager
E1026 07:09:35.644805      24 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 07:09:35.666207      24 kubelet_network.go:77] Setting Pod CIDR:  -> 10.0.0.0/24,2000::/10
I1026 07:09:35.794640      24 kubelet_node_status.go:70] Attempting to register node 127.0.0.1
I1026 07:09:35.795148      24 kubelet_node_status.go:108] Node 127.0.0.1 was previously registered
I1026 07:09:35.795219      24 kubelet_node_status.go:73] Successfully registered node 127.0.0.1
I1026 07:09:35.798596      24 kubelet_node_status.go:108] Node 127.0.0.1 was previously registered
I1026 07:09:35.799909      24 kubelet_node_status.go:108] Node 127.0.0.1 was previously registered
I1026 07:09:35.799968      24 kubelet_node_status.go:246] Controller attach-detach setting changed to false; updating existing Node
I1026 07:09:35.803290      24 kubelet_node_status.go:108] Node 127.0.0.1 was previously registered
I1026 07:09:35.803350      24 kubelet_node_status.go:249] Controller attach-detach setting changed to true; updating existing Node
E1026 07:09:35.806249      24 kubelet_node_status.go:92] Unable to register node "127.0.0.1" with API server: 
E1026 07:09:35.807292      24 kubelet_node_status.go:98] Unable to register node "127.0.0.1" with API server: error getting existing node: 
I1026 07:09:35.808324      24 kubelet_node_status.go:108] Node 127.0.0.1 was previously registered
I1026 07:09:35.808374      24 kubelet_node_status.go:246] Controller attach-detach setting changed to false; updating existing Node
E1026 07:09:35.809293      24 kubelet_node_status.go:119] Unable to reconcile node "127.0.0.1" with API server: error updating node: failed to patch status "{\"metadata\":{\"annotations\":null}}" for node "127.0.0.1": 
E1026 07:09:35.815885      24 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 07:09:35.824762      24 plugin_manager.go:114] Starting Kubelet Plugin Manager
I1026 07:09:35.837581      24 kubelet_node_status.go:167] Removing now unsupported huge page resource named: hugepages-2Mi
I1026 07:09:35.840213      24 kubelet_node_status.go:181] Zero out resource test.com/resource1 capacity in existing node.
I1026 07:09:35.840378      24 kubelet_node_status.go:181] Zero out resource test.com/resource2 capacity in existing node.
I1026 07:09:35.943808      24 kubelet_node_status.go:70] Attempting to register node 127.0.0.1
I1026 07:09:35.943972      24 kubelet_node_status.go:73] Successfully registered node 127.0.0.1
... skipping 12 lines ...
I1026 07:09:36.116922      24 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I1026 07:09:36.117429      24 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
I1026 07:09:36.117939      24 kubelet_resources.go:45] allocatable: map[cpu:{{6 0} {<nil>} 6 DecimalSI} memory:{{4294967296 0} {<nil>} 4Gi BinarySI}]
E1026 07:09:36.119733      24 kubelet.go:1883] Update channel is closed. Exiting the sync loop.
I1026 07:09:36.119828      24 kubelet.go:1803] Starting kubelet main sync loop.
E1026 07:09:36.119942      24 kubelet.go:1883] Update channel is closed. Exiting the sync loop.
W1026 07:09:36.137313      24 predicate.go:79] Failed to admit pod failedpod_foo(4) - Update plugin resources failed due to Allocation failed, which is unexpected.
E1026 07:09:36.147611      24 runtime.go:209] invalid container ID: ""
E1026 07:09:36.148263      24 runtime.go:209] invalid container ID: ""
I1026 07:09:36.159085      24 kubelet.go:1621] Trying to delete pod foo_ns 11111111
W1026 07:09:36.159199      24 kubelet.go:1625] Deleted mirror pod "foo_ns(11111111)" because it is outdated
E1026 07:09:36.207236      24 kubelet_volumes.go:154] orphaned pod "pod1uid" found, but volume paths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
W1026 07:09:36.209625      24 kubelet_getters.go:300] Path "/tmp/kubelet_test.457601561/pods/pod1uid/volumes" does not exist
... skipping 4 lines ...
E1026 07:09:36.212238      24 kubelet_volumes.go:154] orphaned pod "pod1uid" found, but volume subpaths are still present on disk : There were a total of 1 errors similar to this. Turn up verbosity to see them.
W1026 07:09:36.217232      24 kubelet_getters.go:300] Path "/tmp/kubelet_test.947699805/pods/pod1uid/volumes" does not exist
W1026 07:09:36.217360      24 kubelet_getters.go:300] Path "/tmp/kubelet_test.947699805/pods/pod1uid/volumes" does not exist
W1026 07:09:36.239428      24 kubelet_getters.go:300] Path "/tmp/kubelet_test.753061285/pods/poduid/volumes" does not exist
I1026 07:09:36.246675      24 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 07:09:36.247163      24 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 07:09:36.250534      24 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I1026 07:09:36.448205      24 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") 
I1026 07:09:36.448797      24 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") from node "127.0.0.1" 
I1026 07:09:36.449174      24 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") from node "127.0.0.1" 
I1026 07:09:36.448857      24 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") 
I1026 07:09:36.449826      24 reconciler.go:157] Reconciler: start to sync state
I1026 07:09:36.550665      24 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device1") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
... skipping 2 lines ...
I1026 07:09:36.551655      24 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I1026 07:09:36.551815      24 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I1026 07:09:36.551923      24 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "foo" (UID: "12345678") device mount path ""
I1026 07:09:36.846950      24 volume_manager.go:277] Shutting down Kubelet Volume Manager
I1026 07:09:36.849802      24 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 07:09:36.849912      24 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 07:09:36.852971      24 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I1026 07:09:37.051157      24 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") 
I1026 07:09:37.051766      24 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") 
I1026 07:09:37.052027      24 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol3" (UniqueName: "fake/fake-device3") pod "pod3" (UID: "pod3uid") 
I1026 07:09:37.052073      24 reconciler.go:157] Reconciler: start to sync state
I1026 07:09:37.052383      24 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol3" (UniqueName: "fake/fake-device3") from node "127.0.0.1" 
I1026 07:09:37.052448      24 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device1") from node "127.0.0.1" 
... skipping 7 lines ...
I1026 07:09:37.155407      24 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") DevicePath "/dev/sdb"
I1026 07:09:37.155552      24 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol2" (UniqueName: "fake/fake-device2") pod "pod2" (UID: "pod2uid") device mount path ""
I1026 07:09:37.155538      24 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device1") pod "pod1" (UID: "pod1uid") device mount path ""
I1026 07:09:37.450728      24 volume_manager.go:277] Shutting down Kubelet Volume Manager
I1026 07:09:37.453644      24 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 07:09:37.455663      24 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 07:09:37.457183      24 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I1026 07:09:37.557337      24 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I1026 07:09:37.557452      24 reconciler.go:157] Reconciler: start to sync state
I1026 07:09:37.557512      24 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I1026 07:09:37.658411      24 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I1026 07:09:37.658554      24 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I1026 07:09:37.658638      24 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I1026 07:09:37.754075      24 volume_manager.go:277] Shutting down Kubelet Volume Manager
I1026 07:09:37.756453      24 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 07:09:37.756506      24 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 07:09:37.759336      24 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I1026 07:09:37.957817      24 reconciler.go:244] operationExecutor.AttachVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I1026 07:09:37.957928      24 reconciler.go:157] Reconciler: start to sync state
I1026 07:09:37.958075      24 operation_generator.go:360] AttachVolume.Attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") from node "127.0.0.1" 
I1026 07:09:38.058727      24 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/vdb-test"
I1026 07:09:38.058876      24 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I1026 07:09:38.058953      24 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I1026 07:09:38.459876      24 reconciler.go:196] operationExecutor.UnmountVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "12345678" (UID: "12345678") 
I1026 07:09:38.460187      24 operation_generator.go:786] UnmountVolume.TearDown succeeded for volume "fake/fake-device" (OuterVolumeSpecName: "vol1") pod "12345678" (UID: "12345678"). InnerVolumeSpecName "vol1". PluginName "fake", VolumeGidValue ""
I1026 07:09:38.560493      24 reconciler.go:312] operationExecutor.UnmountDevice started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I1026 07:09:38.560687      24 operation_generator.go:880] UnmountDevice succeeded for volume "vol1" %!(EXTRA string=UnmountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" )
I1026 07:09:38.661119      24 reconciler.go:333] operationExecutor.DetachVolume started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I1026 07:09:38.661183      24 operation_generator.go:470] DetachVolume.Detach succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
E1026 07:09:38.685481      24 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: unhandled watch: testing.WatchActionImpl{ActionImpl:testing.ActionImpl{Namespace:"", Verb:"watch", Resource:schema.GroupVersionResource{Group:"storage.k8s.io", Version:"v1", Resource:"csidrivers"}, Subresource:""}, WatchRestrictions:testing.WatchRestrictions{Labels:labels.internalSelector(nil), Fields:fields.andTerm{}, ResourceVersion:""}}
I1026 07:09:38.758301      24 volume_manager.go:277] Shutting down Kubelet Volume Manager
I1026 07:09:38.760916      24 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 07:09:38.761836      24 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 07:09:38.763552      24 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: no reaction implemented for {{ list storage.k8s.io/v1, Resource=csidrivers } storage.k8s.io/v1, Kind=CSIDriver  { }}
I1026 07:09:38.962590      24 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I1026 07:09:38.962777      24 operation_generator.go:1346] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I1026 07:09:38.962854      24 reconciler.go:157] Reconciler: start to sync state
I1026 07:09:39.063520      24 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I1026 07:09:39.063849      24 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I1026 07:09:39.063948      24 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I1026 07:09:39.361466      24 volume_manager.go:277] Shutting down Kubelet Volume Manager
I1026 07:09:39.363801      24 volume_manager.go:266] Starting Kubelet Volume Manager
I1026 07:09:39.364064      24 desired_state_of_world_populator.go:142] Desired state populator starts to run
E1026 07:09:39.366561      24 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: no reaction implemented for {{ list storage.k8s.io/v1, Resource=csidrivers } storage.k8s.io/v1, Kind=CSIDriver  { }}
I1026 07:09:39.565333      24 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") 
I1026 07:09:39.565441      24 reconciler.go:157] Reconciler: start to sync state
I1026 07:09:39.565483      24 operation_generator.go:1346] Controller attach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device path: "fake/path"
I1026 07:09:39.666405      24 operation_generator.go:556] MountVolume.WaitForAttach entering for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "fake/path"
I1026 07:09:39.666558      24 operation_generator.go:565] MountVolume.WaitForAttach succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") DevicePath "/dev/sdb"
I1026 07:09:39.666653      24 operation_generator.go:594] MountVolume.MountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") pod "foo" (UID: "12345678") device mount path ""
I1026 07:09:39.866786      24 reconciler.go:196] operationExecutor.UnmountVolume started for volume "vol1" (UniqueName: "fake/fake-device") pod "12345678" (UID: "12345678") 
I1026 07:09:39.867573      24 operation_generator.go:786] UnmountVolume.TearDown succeeded for volume "fake/fake-device" (OuterVolumeSpecName: "vol1") pod "12345678" (UID: "12345678"). InnerVolumeSpecName "vol1". PluginName "fake", VolumeGidValue ""
I1026 07:09:39.967453      24 reconciler.go:312] operationExecutor.UnmountDevice started for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" 
I1026 07:09:39.967952      24 operation_generator.go:880] UnmountDevice succeeded for volume "vol1" %!(EXTRA string=UnmountDevice succeeded for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" )
I1026 07:09:40.067839      24 reconciler.go:319] Volume detached for volume "vol1" (UniqueName: "fake/fake-device") on node "127.0.0.1" DevicePath "/dev/sdb"
I1026 07:09:40.318359      24 volume_manager.go:277] Shutting down Kubelet Volume Manager
W1026 07:09:40.320081      24 pod_container_deletor.go:79] Container "abc" not found in pod's containers
E1026 07:09:40.453742      24 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
E1026 07:09:40.472868      24 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
E1026 07:09:40.491292      24 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get root cgroup stats: failed to get cgroup stats for "/": unexpected number of containers: 0
I1026 07:09:40.494753      24 runonce.go:88] Waiting for 1 pods
I1026 07:09:40.494823      24 runonce.go:123] pod "foo_new(12345678)" containers running
I1026 07:09:40.495468      24 runonce.go:102] started pod "foo_new(12345678)"
I1026 07:09:40.495623      24 runonce.go:108] 1 pods started
FAIL
================================================================================
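
The repeated reflector errors in the output above ("unhandled watch" and "no reaction implemented" for *v1.CSIDriver) are harness noise: a client-go fake clientset only answers watches through registered reactors. A minimal sketch of registering one, assuming k8s.io/client-go's fake and testing packages; the helper name is hypothetical and not the kubelet test's actual setup:

package example

import (
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes/fake"
	k8stesting "k8s.io/client-go/testing"
)

// newFakeClientWithCSIDriverWatch wires a watch reactor into a fake
// clientset so reflectors watching csidrivers no longer fail with
// "unhandled watch".
func newFakeClientWithCSIDriverWatch() (*fake.Clientset, *watch.FakeWatcher) {
	client := fake.NewSimpleClientset()
	fakeWatch := watch.NewFake()
	client.Fake.PrependWatchReactor("csidrivers",
		k8stesting.DefaultWatchReactor(fakeWatch, nil))
	return client, fakeWatch
}

With a reactor in place the informer's watch succeeds, and a test can inject events through the returned fakeWatch (e.g. fakeWatch.Add(obj)).
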
[19,190 / 19,194] 960 / 962 tests, 1 failed; Testing //pkg/controller/volume/scheduling:go_default_test [85s (2 actions)] ... (4 actions running)
INFO: Elapsed time: 565.338s, Critical Path: 465.47s
INFO: 17269 processes: 15342 remote cache hit, 1927 remote.
INFO: Build completed, 1 test FAILED, 19194 total actions
//cluster:common_test                                                    PASSED in 14.7s
  Stats over 2 runs: max = 14.7s, min = 14.7s, avg = 14.7s, dev = 0.0s
//cluster:kube-util_test                                                 PASSED in 15.0s
  Stats over 2 runs: max = 15.0s, min = 14.7s, avg = 14.8s, dev = 0.2s
//cluster/gce/cos:go_default_test                                        PASSED in 33.5s
  Stats over 2 runs: max = 33.5s, min = 33.5s, avg = 33.5s, dev = 0.0s
... skipping 1910 lines ...
//third_party/forked/golang/expansion:go_default_test                    PASSED in 7.0s
  Stats over 2 runs: max = 7.0s, min = 6.8s, avg = 6.9s, dev = 0.1s
//third_party/forked/golang/reflect:go_default_test                      PASSED in 4.4s
  Stats over 2 runs: max = 4.4s, min = 4.1s, avg = 4.3s, dev = 0.1s
//third_party/forked/gonum/graph/simple:go_default_test                  PASSED in 5.0s
  Stats over 2 runs: max = 5.0s, min = 5.0s, avg = 5.0s, dev = 0.0s
//pkg/kubelet:go_default_test                                            FAILED in 1 out of 2 in 16.5s
  Stats over 2 runs: max = 16.5s, min = 16.1s, avg = 16.3s, dev = 0.2s
  /bazel-scratch/.cache/bazel/_bazel_root/7989b31489f31aee54f32688da2f0120/execroot/io_k8s_kubernetes/bazel-out/k8-fastbuild/testlogs/pkg/kubelet/go_default_test/run_1_of_2/test.log

Executed 962 out of 962 tests: 961 tests pass and 1 fails remotely.
There were tests whose specified size is too big. Use the --test_verbose_timeout_warnings command line option to see which ones these are.
INFO: Build completed, 1 test FAILED, 19194 total actions
+ ../test-infra/hack/coalesce.py
+ exit 3