PR       Namanl2001: adding `--ssh-key` and `--ssh-user` for kubetest2
Result   FAILURE
Tests    0 failed / 0 succeeded
Started  2021-10-26 21:09
Elapsed  53m9s
Revision d72179625971ba4db89362f44b62703a08426f8f
Refs     105637

No Test Failures!


Error lines from build-log.txt

... skipping 104 lines ...
I1026 21:11:49.392101   12685 remote.go:41] Building archive...
I1026 21:11:49.392390   12685 build.go:42] Building k8s binaries...
I1026 21:11:49.639879   12685 run_remote.go:579] Creating instance {image:ubuntu-gke-2004-1-20-v20210401 imageDesc:ubuntu-gke-2004-1-20-v20210401 kernelArguments:[] project:ubuntu-os-gke-cloud resources:{Accelerators:[]} metadata:<nil> machine:n1-standard-2 tests:[]} with service account "509175819738-compute@developer.gserviceaccount.com"
I1026 21:11:49.640906   12685 run_remote.go:579] Creating instance {image:cos-89-16108-534-17 imageDesc:cos-89-16108-534-17 kernelArguments:[] project:cos-cloud resources:{Accelerators:[]} metadata:0xc0005f21c0 machine:n1-standard-2 tests:[]} with service account "509175819738-compute@developer.gserviceaccount.com"
I1026 21:11:49.646389   12685 run_remote.go:579] Creating instance {image:cos-81-12871-1317-7 imageDesc:cos-81-12871-1317-7 kernelArguments:[] project:cos-cloud resources:{Accelerators:[{Type:nvidia-tesla-k80 Count:2}]} metadata:0xc000533500 machine:n1-standard-2 tests:[]} with service account "509175819738-compute@developer.gserviceaccount.com"
I1026 21:11:50.281240   12685 run_remote.go:856] Deleting instance ""
E1026 21:11:50.284355   12685 run_remote.go:859] Error deleting instance "": googleapi: got HTTP response code 404 with body: Google "Error 404 (Not Found)" HTML error page: the requested URL /compute/beta/projects/k8s-jkns-e2e-gce/zones/us-central1-b/instances/?alt=json&prettyPrint=false was not found on this server.

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>                              START TEST                                >
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Start Test Suite on Host 

Failure Finished Test Suite on Host 
unable to create gce instance with running docker daemon for image cos-81-12871-1317-7.  could not create instance n1-standard-2-cos-81-12871-1317-7-49751e4f: API error: googleapi: Error 404: The resource 'projects/k8s-jkns-e2e-gce/zones/us-central1-b/acceleratorTypes/nvidia-tesla-k80' was not found, notFound
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
<                              FINISH TEST                               <
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

+++ [1026 21:11:54] Building go targets for linux/amd64:
    cmd/kubelet
    test/e2e_node/e2e_node.test
    vendor/github.com/onsi/ginkgo/ginkgo
    cluster/gce/gci/mounter
> non-static build: k8s.io/kubernetes/cmd/kubelet k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo k8s.io/kubernetes/cluster/gce/gci/mounter
I1026 21:12:11.947112   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@34.68.40.205 -- sudo sh -c 'systemctl list-units  --type=service  --state=running | grep -e docker -e containerd -e crio']
I1026 21:12:12.382360   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@35.238.54.135 -- sudo sh -c 'systemctl list-units  --type=service  --state=running | grep -e docker -e containerd -e crio']
E1026 21:12:19.634094   12685 ssh.go:120] failed to run SSH command: out: ssh: connect to host 35.238.54.135 port 22: Connection refused

, err: exit status 255
I1026 21:12:40.055932   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@35.238.54.135 -- sudo sh -c 'systemctl list-units  --type=service  --state=running | grep -e docker -e containerd -e crio']
I1026 21:12:40.387371   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@35.238.54.135 -- sudo ls /var/lib/cloud/instance/boot-finished]
E1026 21:12:40.733365   12685 ssh.go:120] failed to run SSH command: out: ls: cannot access '/var/lib/cloud/instance/boot-finished': No such file or directory
, err: exit status 2
I1026 21:13:00.734400   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@35.238.54.135 -- sudo ls /var/lib/cloud/instance/boot-finished]
E1026 21:13:01.044919   12685 ssh.go:120] failed to run SSH command: out: ls: cannot access '/var/lib/cloud/instance/boot-finished': No such file or directory
, err: exit status 2
I1026 21:13:21.046429   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@35.238.54.135 -- sudo ls /var/lib/cloud/instance/boot-finished]
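The retried `ls` calls above poll for cloud-init's first-boot marker: cloud-init creates `/var/lib/cloud/instance/boot-finished` once initialization completes, and the harness keeps checking until it exists. A minimal local sketch of that readiness loop (function name and retry count are illustrative; the real harness runs the check over SSH with roughly 20 seconds between attempts):

```shell
#!/bin/sh
# Hypothetical sketch of the boot-readiness loop seen in the log above.
wait_for_boot() {
  marker="$1"  # path of the boot-finished marker file
  tries="$2"   # maximum number of attempts
  n=0
  while [ "$n" -lt "$tries" ]; do
    # The real harness runs: ssh ... -- sudo ls "$marker"
    if [ -e "$marker" ]; then
      echo ready
      return 0
    fi
    n=$((n + 1))
    # sleep 20   # interval between the attempts in the log
  done
  echo timeout
  return 1
}
```

Until the marker appears, each attempt fails with exit status 2, which is exactly the `ls: cannot access ... No such file or directory` pattern in the log.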
I1026 21:17:04.736177   12685 remote.go:71] Staging test binaries on "n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5"
I1026 21:17:04.736330   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@34.68.40.205 -- mkdir /tmp/node-e2e-20211026T211704]
I1026 21:17:04.736438   12685 remote.go:71] Staging test binaries on "n1-standard-2-cos-89-16108-534-17-1e7097e6"
I1026 21:17:04.736525   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@35.238.54.135 -- mkdir /tmp/node-e2e-20211026T211704]
I1026 21:17:04.855997   12685 ssh.go:117] Running the command scp, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/e2e_node_test.tar.gz prow@35.238.54.135:/tmp/node-e2e-20211026T211704/]
I1026 21:17:05.095813   12685 ssh.go:117] Running the command scp, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/e2e_node_test.tar.gz prow@34.68.40.205:/tmp/node-e2e-20211026T211704/]
I1026 21:17:05.371511   12685 remote.go:98] Extracting tar on "n1-standard-2-cos-89-16108-534-17-1e7097e6"
I1026 21:17:05.371574   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@35.238.54.135 -- sh -c 'cd /tmp/node-e2e-20211026T211704 && tar -xzvf ./e2e_node_test.tar.gz']
I1026 21:17:05.869523   12685 remote.go:98] Extracting tar on "n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5"
I1026 21:17:05.869576   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@34.68.40.205 -- sh -c 'cd /tmp/node-e2e-20211026T211704 && tar -xzvf ./e2e_node_test.tar.gz']
I1026 21:17:07.601889   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@35.238.54.135 -- mkdir /tmp/node-e2e-20211026T211704/results]
I1026 21:17:07.751606   12685 remote.go:113] Running test on "n1-standard-2-cos-89-16108-534-17-1e7097e6"
I1026 21:17:07.751644   12685 utils.go:54] Install CNI on "n1-standard-2-cos-89-16108-534-17-1e7097e6"
I1026 21:17:07.751680   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@35.238.54.135 -- sudo sh -c 'mkdir -p /tmp/node-e2e-20211026T211704/cni/bin ; curl -s -L https://storage.googleapis.com/k8s-artifacts-cni/release/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz | tar -xz -C /tmp/node-e2e-20211026T211704/cni/bin']
I1026 21:17:08.287644   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@34.68.40.205 -- mkdir /tmp/node-e2e-20211026T211704/results]
I1026 21:17:08.516097   12685 remote.go:113] Running test on "n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5"
I1026 21:17:08.516139   12685 utils.go:54] Install CNI on "n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5"
I1026 21:17:08.516179   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@34.68.40.205 -- sudo sh -c 'mkdir -p /tmp/node-e2e-20211026T211704/cni/bin ; curl -s -L https://storage.googleapis.com/k8s-artifacts-cni/release/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz | tar -xz -C /tmp/node-e2e-20211026T211704/cni/bin']
I1026 21:17:08.874269   12685 utils.go:67] Adding CNI configuration on "n1-standard-2-cos-89-16108-534-17-1e7097e6"
I1026 21:17:08.874382   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@35.238.54.135 -- sudo sh -c 'mkdir -p /tmp/node-e2e-20211026T211704/cni/net.d ; echo '"'"'{
  "name": "mynet",
  "type": "bridge",
  "bridge": "mynet0",
  "isDefaultGateway": true,
  "forceAddress": false,
  "ipMasq": true,
... skipping 2 lines ...
    "type": "host-local",
    "subnet": "10.10.0.0/16"
  }
}
'"'"' > /tmp/node-e2e-20211026T211704/cni/net.d/mynet.conf']
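The `'"'"'` sequences in the command above are the standard trick for embedding a single quote inside an already single-quoted `sh -c` payload: close the outer single-quoted string, append a double-quoted `'`, then reopen single quoting. A minimal sketch under an illustrative file path:

```shell
#!/bin/sh
# Each '"'"' is three concatenated pieces: '...' closes, "'" emits a
# literal single quote, and '...' reopens -- so the payload below carries
# an inner single-quoted JSON string through the outer single quotes.
payload='echo '"'"'{"name": "mynet"}'"'"' > /tmp/mynet-demo.conf'
sh -c "$payload"   # executes: echo '{"name": "mynet"}' > /tmp/mynet-demo.conf
cat /tmp/mynet-demo.conf
```

This is why the CNI config in the log survives two layers of quoting (the local shell and the remote `sudo sh -c`) with its double quotes intact.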
I1026 21:17:08.998744   12685 utils.go:81] Configure iptables firewall rules on "n1-standard-2-cos-89-16108-534-17-1e7097e6"
I1026 21:17:08.998829   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@35.238.54.135 -- sudo sh -c 'iptables -I INPUT 1 -w -p tcp -j ACCEPT&&iptables -I INPUT 1 -w -p udp -j ACCEPT&&iptables -I INPUT 1 -w -p icmp -j ACCEPT&&iptables -I FORWARD 1 -w -p tcp -j ACCEPT&&iptables -I FORWARD 1 -w -p udp -j ACCEPT&&iptables -I FORWARD 1 -w -p icmp -j ACCEPT']
I1026 21:17:09.169120   12685 utils.go:102] Killing any existing node processes on "n1-standard-2-cos-89-16108-534-17-1e7097e6"
I1026 21:17:09.169163   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@35.238.54.135 -- sudo sh -c 'pkill kubelet ; pkill kube-apiserver ; pkill etcd ; pkill e2e_node.test']
E1026 21:17:09.301940   12685 ssh.go:120] failed to run SSH command: out: , err: exit status 1
I1026 21:17:09.302011   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@35.238.54.135 -- sudo cat /etc/os-release]
I1026 21:17:09.418554   12685 node_e2e.go:93] GCI/COS node and GCI/COS mounter both detected, modifying --experimental-mounter-path accordingly
I1026 21:17:09.418596   12685 node_e2e.go:183] Starting tests on "n1-standard-2-cos-89-16108-534-17-1e7097e6"
I1026 21:17:09.418633   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@35.238.54.135 -- sudo sh -c 'cd /tmp/node-e2e-20211026T211704 && timeout -k 30s 2700.000000s ./ginkgo  -focus="\[Serial\]"  -skip="\[Flaky\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeSpecialFeature\]|\[NodeAlphaFeature:.+\]|\[NodeAlphaFeature\]|\[NodeFeature:Eviction\]"  -untilItFails=false  ./e2e_node.test -- --system-spec-name= --system-spec-file= --extra-envs= --runtime-config= --logtostderr --v 4 --node-name=n1-standard-2-cos-89-16108-534-17-1e7097e6 --report-dir=/tmp/node-e2e-20211026T211704/results --report-prefix=cos-stable1 --image-description="cos-89-16108-534-17" --kubelet-flags=--experimental-mounter-path=/tmp/node-e2e-20211026T211704/mounter --kubelet-flags=--kernel-memcg-notification=true --kubelet-flags="--cluster-domain=cluster.local" --dns-domain="cluster.local" --feature-gates=DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/"']
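The test run above is wrapped in `timeout -k 30s 2700.000000s ./ginkgo ...`: coreutils `timeout` sends SIGTERM when the limit expires, escalates to SIGKILL 30 seconds later if the process lingers, and reports exit status 124 when the limit was hit. A small sketch with a one-second limit:

```shell
#!/bin/sh
# Demonstrates the timeout semantics used above: `sleep 3` is terminated
# at the 1-second limit, and timeout itself exits with status 124.
timeout 1 sleep 3
status=$?
echo "exit=$status"
```

The 2700-second budget here corresponds to the 45-minute ceiling on the serial suite; the `-k 30s` kill-after only matters if the test binary ignores the initial SIGTERM.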
I1026 21:17:09.771543   12685 utils.go:67] Adding CNI configuration on "n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5"
I1026 21:17:09.771626   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@34.68.40.205 -- sudo sh -c 'mkdir -p /tmp/node-e2e-20211026T211704/cni/net.d ; echo '"'"'{
  "name": "mynet",
  "type": "bridge",
  "bridge": "mynet0",
  "isDefaultGateway": true,
  "forceAddress": false,
  "ipMasq": true,
... skipping 2 lines ...
    "type": "host-local",
    "subnet": "10.10.0.0/16"
  }
}
'"'"' > /tmp/node-e2e-20211026T211704/cni/net.d/mynet.conf']
I1026 21:17:10.018195   12685 utils.go:81] Configure iptables firewall rules on "n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5"
I1026 21:17:10.018241   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@34.68.40.205 -- sudo sh -c 'iptables -I INPUT 1 -w -p tcp -j ACCEPT&&iptables -I INPUT 1 -w -p udp -j ACCEPT&&iptables -I INPUT 1 -w -p icmp -j ACCEPT&&iptables -I FORWARD 1 -w -p tcp -j ACCEPT&&iptables -I FORWARD 1 -w -p udp -j ACCEPT&&iptables -I FORWARD 1 -w -p icmp -j ACCEPT']
I1026 21:17:10.370427   12685 utils.go:102] Killing any existing node processes on "n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5"
I1026 21:17:10.370473   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@34.68.40.205 -- sudo sh -c 'pkill kubelet ; pkill kube-apiserver ; pkill etcd ; pkill e2e_node.test']
E1026 21:17:10.766388   12685 ssh.go:120] failed to run SSH command: out: , err: exit status 1
I1026 21:17:10.766451   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@34.68.40.205 -- sudo cat /etc/os-release]
I1026 21:17:11.012763   12685 node_e2e.go:183] Starting tests on "n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5"
I1026 21:17:11.012808   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@34.68.40.205 -- sudo sh -c 'cd /tmp/node-e2e-20211026T211704 && timeout -k 30s 2700.000000s ./ginkgo  -focus="\[Serial\]"  -skip="\[Flaky\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeSpecialFeature\]|\[NodeAlphaFeature:.+\]|\[NodeAlphaFeature\]|\[NodeFeature:Eviction\]"  -untilItFails=false  ./e2e_node.test -- --system-spec-name= --system-spec-file= --extra-envs= --runtime-config= --logtostderr --v 4 --node-name=n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5 --report-dir=/tmp/node-e2e-20211026T211704/results --report-prefix=ubuntu --image-description="ubuntu-gke-2004-1-20-v20210401" --kubelet-flags=--kernel-memcg-notification=true --kubelet-flags="--cluster-domain=cluster.local" --dns-domain="cluster.local" --feature-gates=DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/"']
E1026 22:02:09.973891   12685 ssh.go:120] failed to run SSH command: out: Flag --logtostderr has been deprecated, will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components
Oct 26 21:17:09.639: INFO: The --provider flag is not set. Continuing as if --provider=skeleton had been used.
W1026 21:17:09.639811    1509 test_context.go:457] Unable to find in-cluster config, using default host : https://127.0.0.1:6443
I1026 21:17:09.639856    1509 test_context.go:474] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
STEP: Enabling support for Kubelet Plugins Watcher
I1026 21:17:09.723247    1509 mount_linux.go:222] Detected OS with systemd
I1026 21:17:09.731483    1509 mount_linux.go:222] Detected OS with systemd
... skipping 54 lines ...
Oct 26 21:17:10.438: INFO: Parsing ds from https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/device-plugins/nvidia-gpu/daemonset.yaml
I1026 21:17:10.494327    1509 image_list.go:171] Pre-pulling images with docker [docker.io/nfvpe/sriov-device-plugin:v3.1 google/cadvisor:latest k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff k8s.gcr.io/e2e-test-images/agnhost:2.33 k8s.gcr.io/e2e-test-images/busybox:1.29-2 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 k8s.gcr.io/e2e-test-images/ipc-utils:1.3 k8s.gcr.io/e2e-test-images/nginx:1.14-2 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.2 k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.2 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1 k8s.gcr.io/e2e-test-images/nonewprivs:1.3 k8s.gcr.io/e2e-test-images/nonroot:1.2 k8s.gcr.io/e2e-test-images/perl:5.26 k8s.gcr.io/e2e-test-images/volume/gluster:1.3 k8s.gcr.io/e2e-test-images/volume/nfs:1.3 k8s.gcr.io/node-problem-detector/node-problem-detector:v0.8.7 k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa k8s.gcr.io/pause:3.6 k8s.gcr.io/stress:v1 quay.io/kubevirt/device-plugin-kvm]
I1026 21:18:40.845260    1509 server.go:102] Starting server "services" with command "/tmp/node-e2e-20211026T211704/e2e_node.test --run-services-mode --bearer-token=2-5_SQKd_pSnMc2X --test.timeout=24h0m0s --ginkgo.seed=1635283029 --ginkgo.focus=\\[Serial\\] --ginkgo.skip=\\[Flaky\\]|\\[Benchmark\\]|\\[NodeSpecialFeature:.+\\]|\\[NodeSpecialFeature\\]|\\[NodeAlphaFeature:.+\\]|\\[NodeAlphaFeature\\]|\\[NodeFeature:Eviction\\] --ginkgo.slowSpecThreshold=5.00000 --system-spec-name= --system-spec-file= --extra-envs= --runtime-config= --logtostderr --v 4 --node-name=n1-standard-2-cos-89-16108-534-17-1e7097e6 --report-dir=/tmp/node-e2e-20211026T211704/results --report-prefix=cos-stable1 --image-description=cos-89-16108-534-17 --kubelet-flags=--experimental-mounter-path=/tmp/node-e2e-20211026T211704/mounter --kubelet-flags=--kernel-memcg-notification=true --kubelet-flags=--cluster-domain=cluster.local --dns-domain=cluster.local --feature-gates=DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --kubelet-flags=--cgroups-per-qos=true --cgroup-root=/"
I1026 21:18:40.845305    1509 util.go:48] Running readiness check for service "services"
I1026 21:18:40.845373    1509 server.go:130] Output file for server "services": /tmp/node-e2e-20211026T211704/results/services.log
I1026 21:18:40.845822    1509 server.go:160] Waiting for server "services" start command to complete
W1026 21:18:44.699599    1509 util.go:106] Health check on "https://127.0.0.1:6443/healthz" failed, status=500
I1026 21:18:45.701848    1509 services.go:70] Node services started.
I1026 21:18:45.701866    1509 kubelet.go:100] Starting kubelet
W1026 21:18:45.701949    1509 feature_gate.go:235] Setting deprecated feature gate DynamicKubeletConfig=true. It will be removed in a future release.
I1026 21:18:45.701969    1509 feature_gate.go:245] feature gates: &{map[DynamicKubeletConfig:true LocalStorageCapacityIsolation:true]}
I1026 21:18:45.704187    1509 server.go:102] Starting server "kubelet" with command "/usr/bin/systemd-run -p Delegate=true -p StandardError=file:/tmp/node-e2e-20211026T211704/results/kubelet.log --unit=kubelet-20211026T211704.service --slice=runtime.slice --remain-after-exit /tmp/node-e2e-20211026T211704/kubelet --kubeconfig /tmp/node-e2e-20211026T211704/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --feature-gates DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --dynamic-config-dir /tmp/node-e2e-20211026T211704/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20211026T211704/cni/bin --cni-conf-dir /tmp/node-e2e-20211026T211704/cni/net.d --cni-cache-dir /tmp/node-e2e-20211026T211704/cni/cache --hostname-override n1-standard-2-cos-89-16108-534-17-1e7097e6 --container-runtime docker --container-runtime-endpoint unix:///var/run/dockershim.sock --config /tmp/node-e2e-20211026T211704/kubelet-config --experimental-mounter-path=/tmp/node-e2e-20211026T211704/mounter --kernel-memcg-notification=true --cluster-domain=cluster.local --cgroups-per-qos=true --cgroup-root=/"
I1026 21:18:45.704313    1509 util.go:48] Running readiness check for service "kubelet"
I1026 21:18:45.704369    1509 server.go:130] Output file for server "kubelet": /tmp/node-e2e-20211026T211704/results/kubelet.log
I1026 21:18:45.704678    1509 server.go:171] Running health check for service "kubelet"
I1026 21:18:45.704697    1509 util.go:48] Running readiness check for service "kubelet"
W1026 21:18:46.704980    1509 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W1026 21:18:46.705051    1509 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I1026 21:18:47.706588    1509 server.go:182] Initial health check passed for service "kubelet"
I1026 21:18:47.706940    1509 services.go:80] Kubelet started.
I1026 21:18:47.706961    1509 e2e_node_suite_test.go:217] Wait for the node to be ready
Oct 26 21:18:57.757: INFO: Parsing ds from https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/device-plugins/nvidia-gpu/daemonset.yaml
SSSSSSSSSSSSSSSS
------------------------------
... skipping 40 lines ...
I1026 21:19:09.738213    1509 util.go:48] Running readiness check for service "kubelet"
I1026 21:19:10.739425    1509 server.go:182] Initial health check passed for service "kubelet"
STEP: setting initial state "correct"
I1026 21:19:21.751293    1509 server.go:222] Restarting server "kubelet" with restart command
I1026 21:19:21.761273    1509 server.go:171] Running health check for service "kubelet"
I1026 21:19:21.761303    1509 util.go:48] Running readiness check for service "kubelet"
STEP: from "correct" to "fail-parse"
I1026 21:19:22.762844    1509 server.go:182] Initial health check passed for service "kubelet"
I1026 21:19:33.776293    1509 server.go:222] Restarting server "kubelet" with restart command
I1026 21:19:33.783602    1509 server.go:171] Running health check for service "kubelet"
I1026 21:19:33.783629    1509 util.go:48] Running readiness check for service "kubelet"
STEP: back to "correct" from "fail-parse"
I1026 21:19:34.785747    1509 server.go:182] Initial health check passed for service "kubelet"
I1026 21:19:45.800359    1509 server.go:222] Restarting server "kubelet" with restart command
I1026 21:19:45.816963    1509 server.go:171] Running health check for service "kubelet"
I1026 21:19:45.816993    1509 util.go:48] Running readiness check for service "kubelet"
STEP: from "correct" to "fail-validate"
I1026 21:19:46.819153    1509 server.go:182] Initial health check passed for service "kubelet"
I1026 21:19:57.830990    1509 server.go:222] Restarting server "kubelet" with restart command
I1026 21:19:57.840073    1509 server.go:171] Running health check for service "kubelet"
I1026 21:19:57.840103    1509 util.go:48] Running readiness check for service "kubelet"
STEP: back to "correct" from "fail-validate"
I1026 21:19:58.841782    1509 server.go:182] Initial health check passed for service "kubelet"
I1026 21:20:08.852614    1509 server.go:222] Restarting server "kubelet" with restart command
I1026 21:20:08.861461    1509 server.go:171] Running health check for service "kubelet"
I1026 21:20:08.861496    1509 util.go:48] Running readiness check for service "kubelet"
STEP: setting initial state "fail-parse"
I1026 21:20:09.863077    1509 server.go:182] Initial health check passed for service "kubelet"
I1026 21:20:21.876259    1509 server.go:222] Restarting server "kubelet" with restart command
I1026 21:20:21.884592    1509 server.go:171] Running health check for service "kubelet"
I1026 21:20:21.884619    1509 util.go:48] Running readiness check for service "kubelet"
STEP: from "fail-parse" to "fail-validate"
I1026 21:20:22.885945    1509 server.go:182] Initial health check passed for service "kubelet"
I1026 21:20:33.899444    1509 server.go:222] Restarting server "kubelet" with restart command
I1026 21:20:33.908462    1509 server.go:171] Running health check for service "kubelet"
I1026 21:20:33.908492    1509 util.go:48] Running readiness check for service "kubelet"
STEP: back to "fail-parse" from "fail-validate"
I1026 21:20:34.910579    1509 server.go:182] Initial health check passed for service "kubelet"
I1026 21:20:46.924798    1509 server.go:222] Restarting server "kubelet" with restart command
I1026 21:20:46.935622    1509 server.go:171] Running health check for service "kubelet"
I1026 21:20:46.935656    1509 util.go:48] Running readiness check for service "kubelet"
I1026 21:20:47.937939    1509 server.go:182] Initial health check passed for service "kubelet"
STEP: setting initial state "fail-validate"
I1026 21:20:59.952683    1509 server.go:222] Restarting server "kubelet" with restart command
I1026 21:20:59.960610    1509 server.go:171] Running health check for service "kubelet"
I1026 21:20:59.960640    1509 util.go:48] Running readiness check for service "kubelet"
I1026 21:21:00.961886    1509 server.go:182] Initial health check passed for service "kubelet"
[AfterEach] 
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:123
... skipping 158 lines ...
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
, kubelet-20211026T211704
W1026 21:25:16.531030    1509 util.go:469] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
Oct 26 21:25:16.547: INFO: Get running kubelet with systemctl: UNIT                                                               LOAD   ACTIVE SUB     DESCRIPTION                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         
home-kubernetes-containerized_mounter-rootfs-var-lib-kubelet.mount loaded active mounted /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       
kubelet-20211026T211704.service                                    loaded active exited  /tmp/node-e2e-20211026T211704/kubelet --kubeconfig /tmp/node-e2e-20211026T211704/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --feature-gates DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --dynamic-config-dir /tmp/node-e2e-20211026T211704/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20211026T211704/cni/bin --cni-conf-dir /tmp/node-e2e-20211026T211704/cni/net.d --cni-cache-dir /tmp/node-e2e-20211026T211704/cni/cache --hostname-override n1-standard-2-cos-89-16108-534-17-1e7097e6 --container-runtime docker --container-runtime-endpoint unix:///var/run/dockershim.sock --config /tmp/node-e2e-20211026T211704/kubelet-config --experimental-mounter-path=/tmp/node-e2e-20211026T211704/mounter --kernel-memcg-notification=true --cluster-domain=cluster.local --cgroups-per-qos=true --cgroup-root=/

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

2 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
, kubelet-20211026T211704
W1026 21:25:16.579278    1509 util.go:469] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
STEP: Waiting for hugepages resource to become available on the local node
W1026 21:25:26.630328    1509 warnings.go:70] spec.configSource: deprecated in v1.22, support removal is planned in v1.23
I1026 21:25:29.252727    1509 server.go:222] Restarting server "kubelet" with restart command
I1026 21:25:29.260863    1509 server.go:171] Running health check for service "kubelet"
I1026 21:25:29.260894    1509 util.go:48] Running readiness check for service "kubelet"
I1026 21:25:30.262344    1509 server.go:182] Initial health check passed for service "kubelet"
... skipping 878 lines ...
Oct 26 21:33:07.536: INFO: Waiting for pod pod-checkpoint-no-disrupt667dec0c-01ab-43d8-9666-b3d54585e3ad to disappear
Oct 26 21:33:07.539: INFO: Pod pod-checkpoint-no-disrupt667dec0c-01ab-43d8-9666-b3d54585e3ad no longer exists
STEP: Waiting for checkpoint to be removed
STEP: Search checkpoints containing "pod-checkpoint-no-disrupt667dec0c-01ab-43d8-9666-b3d54585e3ad"
Oct 26 21:33:07.547: INFO: Checkpoint of "pod-checkpoint-no-disrupt667dec0c-01ab-43d8-9666-b3d54585e3ad" still exists: [/var/lib/dockershim/sandbox/e002550ddd1cb011a2743d02e00bd9f76db3ab9e1ec41f21fe9654146b6ac1ea]
STEP: Search checkpoints containing "pod-checkpoint-no-disrupt667dec0c-01ab-43d8-9666-b3d54585e3ad"
Oct 26 21:33:17.557: INFO: grep from dockershim checkpoint directory returns error: exit status 1
[AfterEach] [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 26 21:33:17.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dockerhism-checkpoint-test-245" for this suite.

• [SLOW TEST:16.071 seconds]
... skipping 259 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 26 21:44:14.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resource-usage-3082" for this suite.
[AfterEach] [sig-node] Resource-usage [Serial] [Slow]
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:60
W1026 21:44:15.007727    1509 metrics_grabber.go:110] Can't find any pods in namespace kube-system to grab metrics from
Oct 26 21:44:15.045: INFO: runtime operation error metrics:
node "n1-standard-2-cos-89-16108-534-17-1e7097e6" runtime operation error rate:
operation "remove_container": total - 13; error rate - 0.000000; timeout rate - 0.000000
operation "create_container": total - 22; error rate - 0.000000; timeout rate - 0.000000
operation "inspect_container": total - 238; error rate - 0.033613; timeout rate - 0.000000
operation "start_container": total - 22; error rate - 0.000000; timeout rate - 0.000000
operation "list_images": total - 91; error rate - 0.000000; timeout rate - 0.000000
operation "stop_container": total - 27; error rate - 0.000000; timeout rate - 0.000000
operation "version": total - 194; error rate - 0.000000; timeout rate - 0.000000
operation "list_containers": total - 2507; error rate - 0.000000; timeout rate - 0.000000
operation "info": total - 0; error rate - NaN; timeout rate - NaN
operation "inspect_image": total - 92; error rate - 0.000000; timeout rate - 0.000000
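The per-operation figures above are simple ratios: error rate = errors/total, which is why the zero-total "info" operation prints NaN. A small sketch of the arithmetic; the failed-call count of 8 for inspect_container is inferred from the printed rate, not a figure in the log:

```go
package main

import "fmt"

// errorRate reproduces the ratio behind the "runtime operation error
// metrics" block: failed operations over total operations.
func errorRate(errors, total int) float64 {
	if total == 0 {
		return 0 // guard; the harness itself prints NaN for the zero-total "info" op
	}
	return float64(errors) / float64(total)
}

func main() {
	// 8 / 238 matches the 0.033613 printed for "inspect_container".
	fmt.Printf("%.6f\n", errorRate(8, 238)) // prints 0.033613
}
```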



• [SLOW TEST:653.393 seconds]
[sig-node] Resource-usage [Serial] [Slow]
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:23
... skipping 90 lines ...
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
, kubelet-20211026T211704
W1026 21:57:00.475862    1509 util.go:469] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": read tcp 127.0.0.1:48670->127.0.0.1:10255: read: connection reset by peer
Oct 26 21:57:00.489: INFO: Get running kubelet with systemctl: UNIT                                                               LOAD   ACTIVE SUB     DESCRIPTION                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         
home-kubernetes-containerized_mounter-rootfs-var-lib-kubelet.mount loaded active mounted /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       
kubelet-20211026T211704.service                                    loaded active exited  /tmp/node-e2e-20211026T211704/kubelet --kubeconfig /tmp/node-e2e-20211026T211704/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --feature-gates DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --dynamic-config-dir /tmp/node-e2e-20211026T211704/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20211026T211704/cni/bin --cni-conf-dir /tmp/node-e2e-20211026T211704/cni/net.d --cni-cache-dir /tmp/node-e2e-20211026T211704/cni/cache --hostname-override n1-standard-2-cos-89-16108-534-17-1e7097e6 --container-runtime docker --container-runtime-endpoint unix:///var/run/dockershim.sock --config /tmp/node-e2e-20211026T211704/kubelet-config --experimental-mounter-path=/tmp/node-e2e-20211026T211704/mounter --kernel-memcg-notification=true --cluster-domain=cluster.local --cgroups-per-qos=true --cgroup-root=/

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

2 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
, kubelet-20211026T211704
W1026 21:57:00.516912    1509 util.go:469] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
STEP: Waiting for hugepages resource to become available on the local node
W1026 21:57:10.580152    1509 warnings.go:70] spec.configSource: deprecated in v1.22, support removal is planned in v1.23
I1026 21:57:12.977803    1509 server.go:222] Restarting server "kubelet" with restart command
I1026 21:57:12.987687    1509 server.go:171] Running health check for service "kubelet"
I1026 21:57:12.987715    1509 util.go:48] Running readiness check for service "kubelet"
I1026 21:57:13.988978    1509 server.go:182] Initial health check passed for service "kubelet"
... skipping 171 lines ...
STEP: Wait for 0 temp events generated
STEP: Wait for 0 total events generated
STEP: Make sure only 0 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should not generate events for too old log
STEP: Inject 3 logs: "temporary error"
STEP: Wait for 0 temp events generated
STEP: Wait for 0 total events generated
STEP: Make sure only 0 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should not change node condition for too old log
STEP: Inject 1 logs: "permanent error 1"
STEP: Wait for 0 temp events generated
STEP: Wait for 0 total events generated
STEP: Make sure only 0 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should generate event for old log within lookback duration
STEP: Inject 3 logs: "temporary error"
STEP: Wait for 3 temp events generated
STEP: Wait for 3 total events generated
STEP: Make sure only 3 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should change node condition for old log within lookback duration
STEP: Inject 1 logs: "permanent error 1"
STEP: Wait for 3 temp events generated
STEP: Wait for 4 total events generated
STEP: Make sure only 4 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should generate event for new log
STEP: Inject 3 logs: "temporary error"
STEP: Wait for 6 temp events generated
STEP: Wait for 7 total events generated
STEP: Make sure only 7 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should not update node condition with the same reason
STEP: Inject 1 logs: "permanent error 1different message"
STEP: Wait for 6 temp events generated
STEP: Wait for 7 total events generated
STEP: Make sure only 7 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should change node condition for new log
STEP: Inject 1 logs: "permanent error 2"
STEP: Wait for 6 temp events generated
STEP: Wait for 8 total events generated
STEP: Make sure only 8 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
[AfterEach] SystemLogMonitor
... skipping 70 lines ...
Oct 26 22:01:21.113: INFO: Skipping waiting for service account
[BeforeEach] Downward API tests for local ephemeral storage
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi.go:38
[It] should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi.go:42
STEP: Creating a pod to test downward api env vars
Oct 26 22:01:21.118: INFO: Waiting up to 5m0s for pod "downward-api-421796bb-5357-48da-9dd4-ea14520cc3ae" in namespace "downward-api-5715" to be "Succeeded or Failed"
Oct 26 22:01:21.120: INFO: Pod "downward-api-421796bb-5357-48da-9dd4-ea14520cc3ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.389055ms
Oct 26 22:01:23.123: INFO: Pod "downward-api-421796bb-5357-48da-9dd4-ea14520cc3ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005294118s
STEP: Saw pod success
Oct 26 22:01:23.123: INFO: Pod "downward-api-421796bb-5357-48da-9dd4-ea14520cc3ae" satisfied condition "Succeeded or Failed"
Oct 26 22:01:23.125: INFO: Trying to get logs from node n1-standard-2-cos-89-16108-534-17-1e7097e6 pod downward-api-421796bb-5357-48da-9dd4-ea14520cc3ae container dapi-container: <nil>
STEP: delete the pod
Oct 26 22:01:23.135: INFO: Waiting for pod downward-api-421796bb-5357-48da-9dd4-ea14520cc3ae to disappear
Oct 26 22:01:23.136: INFO: Pod downward-api-421796bb-5357-48da-9dd4-ea14520cc3ae no longer exists
[AfterEach] [sig-storage] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 67 lines ...
Oct 26 22:01:27.185: INFO: The status of Pod gc-test-pod-many-containers-many-restarts-two is Pending, waiting for it to be Running (with Ready = true)
Oct 26 22:01:27.186: INFO: The status of Pod gc-test-pod-many-containers-many-restarts-one is Pending, waiting for it to be Running (with Ready = true)
Oct 26 22:01:27.187: INFO: The status of Pod gc-test-pod-many-containers-many-restarts-three is Pending, waiting for it to be Running (with Ready = true)
I1026 22:01:28.274883    1509 server.go:222] Restarting server "kubelet" with restart command
I1026 22:01:28.283602    1509 server.go:171] Running health check for service "kubelet"
I1026 22:01:28.283631    1509 util.go:48] Running readiness check for service "kubelet"
W1026 22:01:28.395387    1509 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 22:01:28.395577    1509 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 22:01:28.395928    1509 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 22:01:28.396037    1509 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 22:01:28.396117    1509 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 22:01:28.396192    1509 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
Oct 26 22:01:29.185: INFO: The status of Pod gc-test-pod-many-containers-many-restarts-three is Pending, waiting for it to be Running (with Ready = true)
Oct 26 22:01:29.186: INFO: The status of Pod gc-test-pod-many-containers-many-restarts-two is Pending, waiting for it to be Running (with Ready = true)
Oct 26 22:01:29.186: INFO: The status of Pod gc-test-pod-many-containers-many-restarts-one is Pending, waiting for it to be Running (with Ready = true)
I1026 22:01:29.285174    1509 server.go:182] Initial health check passed for service "kubelet"
Oct 26 22:01:31.189: INFO: The status of Pod gc-test-pod-many-containers-many-restarts-one is Running (Ready = false)
Oct 26 22:01:31.189: INFO: The status of Pod gc-test-pod-many-containers-many-restarts-three is Running (Ready = false)
... skipping 48 lines ...
---------------------------------------------------------
Received interrupt.  Running AfterSuite...
^C again to terminate immediately
I1026 22:02:09.543088    1509 e2e_node_suite_test.go:237] Stopping node services...
I1026 22:02:09.543099    1509 server.go:257] Kill server "services"
I1026 22:02:09.543110    1509 server.go:294] Killing process 2762 (services) with -TERM
E1026 22:02:09.606841    1509 services.go:95] Failed to stop services: error stopping "services": waitid: no child processes
I1026 22:02:09.606875    1509 server.go:257] Kill server "kubelet"
I1026 22:02:09.617184    1509 services.go:156] Fetching log files...
I1026 22:02:09.617267    1509 services.go:165] Get log file "containerd.log" with journalctl command [-u containerd].
I1026 22:02:09.642622    1509 services.go:165] Get log file "containerd-installation.log" with journalctl command [-u containerd-installation].
I1026 22:02:09.646705    1509 services.go:165] Get log file "kern.log" with journalctl command [-k].
I1026 22:02:09.664167    1509 services.go:165] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I1026 22:02:09.953851    1509 services.go:165] Get log file "docker.log" with journalctl command [-u docker].
I1026 22:02:09.964798    1509 e2e_node_suite_test.go:242] Tests Finished

JUnit report was created: /tmp/node-e2e-20211026T211704/results/junit_cos-stable1_01.xml

Ran 13 of 97 Specs in 2700.222 seconds
FAIL! -- 13 Passed | 0 Failed | 1 Pending | 83 Skipped

Ginkgo ran 1 suite in 45m0.424709442s
Test Suite Failed
, err: exit status 124
I1026 22:02:09.975430   12685 remote.go:198] Test failed unexpectedly. Attempting to retrieve system logs (only works for nodes with journald)
I1026 22:02:09.975471   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@35.238.54.135 -- sudo sh -c 'journalctl --system --all > /tmp/20211026T220209-system.log']
I1026 22:02:10.243749   12685 remote.go:203] Got the system logs from journald; copying it back...
I1026 22:02:10.243796   12685 ssh.go:117] Running the command scp, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@35.238.54.135:/tmp/20211026T220209-system.log /logs/artifacts/a121f945-36a0-11ec-ba9f-a2e2905e9978/n1-standard-2-cos-89-16108-534-17-1e7097e6-system.log]
I1026 22:02:10.390449   12685 remote.go:123] Copying test artifacts from "n1-standard-2-cos-89-16108-534-17-1e7097e6"
I1026 22:02:10.390630   12685 ssh.go:117] Running the command scp, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine -r prow@35.238.54.135:/tmp/node-e2e-20211026T211704/results/*.log /logs/artifacts/a121f945-36a0-11ec-ba9f-a2e2905e9978/n1-standard-2-cos-89-16108-534-17-1e7097e6]
I1026 22:02:10.627104   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@35.238.54.135 -- sudo ls /tmp/node-e2e-20211026T211704/results/*.json]
I1026 22:02:10.741228   12685 ssh.go:117] Running the command scp, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine -r prow@35.238.54.135:/tmp/node-e2e-20211026T211704/results/*.json /logs/artifacts/a121f945-36a0-11ec-ba9f-a2e2905e9978]
I1026 22:02:10.856890   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@35.238.54.135 -- sudo ls /tmp/node-e2e-20211026T211704/results/junit*]
I1026 22:02:10.973454   12685 ssh.go:117] Running the command scp, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@35.238.54.135:/tmp/node-e2e-20211026T211704/results/junit* /logs/artifacts/a121f945-36a0-11ec-ba9f-a2e2905e9978]
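The artifact-collection run above shells out to ssh/scp with host-key checking disabled, which is reasonable for ephemeral test VMs whose keys are never reused. A minimal sketch of such a wrapper, in the spirit of the ssh.go invocations in the log; the host, key path, and the added ConnectTimeout option are assumptions, not the job's exact flags:

```go
package main

import (
	"fmt"
	"os/exec"
)

// runSSH builds an ssh argv like the ones logged by ssh.go: known-hosts
// checking is disabled and a private key is passed explicitly.
func runSSH(host, keyPath string, remoteCmd ...string) ([]byte, error) {
	args := []string{
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "StrictHostKeyChecking=no",
		"-o", "ConnectTimeout=1", // added so the demo fails fast; not in the logged flags
		"-o", "LogLevel=ERROR",
		"-i", keyPath,
		host, "--",
	}
	args = append(args, remoteCmd...)
	return exec.Command("ssh", args...).CombinedOutput()
}

func main() {
	// 192.0.2.10 is a reserved TEST-NET address: this call fails fast and
	// only demonstrates the command shape, not a live transfer.
	out, err := runSSH("prow@192.0.2.10", "/root/.ssh/google_compute_engine",
		"sudo", "sh", "-c", "journalctl --system --all > /tmp/system.log")
	fmt.Printf("out=%q err=%v\n", out, err)
}
```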
I1026 22:02:11.375636   12685 run_remote.go:856] Deleting instance "n1-standard-2-cos-89-16108-534-17-1e7097e6"
E1026 22:02:11.389307   12685 ssh.go:120] failed to run SSH command: out: Flag --logtostderr has been deprecated, will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components
W1026 21:17:11.355949    3009 test_context.go:457] Unable to find in-cluster config, using default host : https://127.0.0.1:6443
I1026 21:17:11.356009    3009 test_context.go:474] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
Oct 26 21:17:11.356: INFO: The --provider flag is not set. Continuing as if --provider=skeleton had been used.
STEP: Enabling support for Kubelet Plugins Watcher
I1026 21:17:11.445749    3009 mount_linux.go:222] Detected OS with systemd
I1026 21:17:11.459635    3009 mount_linux.go:222] Detected OS with systemd
... skipping 54 lines ...
Oct 26 21:17:11.645: INFO: Parsing ds from https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/device-plugins/nvidia-gpu/daemonset.yaml
I1026 21:17:11.698442    3009 image_list.go:171] Pre-pulling images with docker [docker.io/nfvpe/sriov-device-plugin:v3.1 google/cadvisor:latest k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff k8s.gcr.io/e2e-test-images/agnhost:2.33 k8s.gcr.io/e2e-test-images/busybox:1.29-2 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 k8s.gcr.io/e2e-test-images/ipc-utils:1.3 k8s.gcr.io/e2e-test-images/nginx:1.14-2 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.2 k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.2 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1 k8s.gcr.io/e2e-test-images/nonewprivs:1.3 k8s.gcr.io/e2e-test-images/nonroot:1.2 k8s.gcr.io/e2e-test-images/perl:5.26 k8s.gcr.io/e2e-test-images/volume/gluster:1.3 k8s.gcr.io/e2e-test-images/volume/nfs:1.3 k8s.gcr.io/node-problem-detector/node-problem-detector:v0.8.7 k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa k8s.gcr.io/pause:3.6 k8s.gcr.io/stress:v1 quay.io/kubevirt/device-plugin-kvm]
I1026 21:18:43.076825    3009 server.go:102] Starting server "services" with command "/tmp/node-e2e-20211026T211704/e2e_node.test --run-services-mode --bearer-token=w_TwbbHZDjXBgAEJ --test.timeout=24h0m0s --ginkgo.seed=1635283031 --ginkgo.focus=\\[Serial\\] --ginkgo.skip=\\[Flaky\\]|\\[Benchmark\\]|\\[NodeSpecialFeature:.+\\]|\\[NodeSpecialFeature\\]|\\[NodeAlphaFeature:.+\\]|\\[NodeAlphaFeature\\]|\\[NodeFeature:Eviction\\] --ginkgo.slowSpecThreshold=5.00000 --system-spec-name= --system-spec-file= --extra-envs= --runtime-config= --logtostderr --v 4 --node-name=n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5 --report-dir=/tmp/node-e2e-20211026T211704/results --report-prefix=ubuntu --image-description=ubuntu-gke-2004-1-20-v20210401 --kubelet-flags=--kernel-memcg-notification=true --kubelet-flags=--cluster-domain=cluster.local --dns-domain=cluster.local --feature-gates=DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --kubelet-flags=--cgroups-per-qos=true --cgroup-root=/"
I1026 21:18:43.076898    3009 util.go:48] Running readiness check for service "services"
I1026 21:18:43.076991    3009 server.go:130] Output file for server "services": /tmp/node-e2e-20211026T211704/results/services.log
I1026 21:18:43.077397    3009 server.go:160] Waiting for server "services" start command to complete
W1026 21:18:44.077578    3009 util.go:104] Health check on "https://127.0.0.1:6443/healthz" failed, error=Head "https://127.0.0.1:6443/healthz": dial tcp 127.0.0.1:6443: connect: connection refused
W1026 21:18:47.932074    3009 util.go:106] Health check on "https://127.0.0.1:6443/healthz" failed, status=500
I1026 21:18:48.933800    3009 services.go:70] Node services started.
I1026 21:18:48.933828    3009 kubelet.go:100] Starting kubelet
W1026 21:18:48.933917    3009 feature_gate.go:235] Setting deprecated feature gate DynamicKubeletConfig=true. It will be removed in a future release.
I1026 21:18:48.933932    3009 feature_gate.go:245] feature gates: &{map[DynamicKubeletConfig:true LocalStorageCapacityIsolation:true]}
I1026 21:18:48.935581    3009 server.go:102] Starting server "kubelet" with command "/usr/bin/systemd-run -p Delegate=true -p StandardError=file:/tmp/node-e2e-20211026T211704/results/kubelet.log --unit=kubelet-20211026T211704.service --slice=runtime.slice --remain-after-exit /tmp/node-e2e-20211026T211704/kubelet --kubeconfig /tmp/node-e2e-20211026T211704/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --feature-gates DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --dynamic-config-dir /tmp/node-e2e-20211026T211704/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20211026T211704/cni/bin --cni-conf-dir /tmp/node-e2e-20211026T211704/cni/net.d --cni-cache-dir /tmp/node-e2e-20211026T211704/cni/cache --hostname-override n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5 --container-runtime docker --container-runtime-endpoint unix:///var/run/dockershim.sock --config /tmp/node-e2e-20211026T211704/kubelet-config --kernel-memcg-notification=true --cluster-domain=cluster.local --cgroups-per-qos=true --cgroup-root=/"
I1026 21:18:48.935623    3009 util.go:48] Running readiness check for service "kubelet"
I1026 21:18:48.935697    3009 server.go:130] Output file for server "kubelet": /tmp/node-e2e-20211026T211704/results/kubelet.log
I1026 21:18:48.936166    3009 server.go:171] Running health check for service "kubelet"
I1026 21:18:48.936187    3009 util.go:48] Running readiness check for service "kubelet"
W1026 21:18:49.936193    3009 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W1026 21:18:49.936536    3009 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I1026 21:18:50.937594    3009 server.go:182] Initial health check passed for service "kubelet"
I1026 21:18:50.937655    3009 services.go:80] Kubelet started.
I1026 21:18:50.937669    3009 e2e_node_suite_test.go:217] Wait for the node to be ready
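The kubelet here is launched as a transient systemd unit via the systemd-run command shown above, which is what lets the harness restart it by unit name and capture its stderr to kubelet.log. A sketch of how such an argv can be assembled; buildKubeletUnit and its reduced flag set are illustrative, not the harness's actual helper:

```go
package main

import (
	"fmt"
	"strings"
)

// buildKubeletUnit wraps a kubelet binary in a transient systemd unit,
// mirroring the logged command: delegated cgroups, stderr redirected to a
// results file, a timestamped unit name, and --remain-after-exit so the
// unit stays visible to systemctl after the process stops.
func buildKubeletUnit(ts, kubeletPath string, flags []string) []string {
	cmd := []string{
		"/usr/bin/systemd-run",
		"-p", "Delegate=true",
		"-p", "StandardError=file:/tmp/node-e2e-" + ts + "/results/kubelet.log",
		"--unit=kubelet-" + ts + ".service",
		"--slice=runtime.slice",
		"--remain-after-exit",
		kubeletPath,
	}
	return append(cmd, flags...)
}

func main() {
	cmd := buildKubeletUnit("20211026T211704", "/tmp/node-e2e-20211026T211704/kubelet",
		[]string{"--v", "4", "--logtostderr"})
	fmt.Println(strings.Join(cmd, " "))
}
```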
Oct 26 21:19:00.988: INFO: Parsing ds from https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/device-plugins/nvidia-gpu/daemonset.yaml
[sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When pod sandbox checkpoint is missing 
  should complete pod sandbox clean up
... skipping 21 lines ...
Oct 26 21:19:05.118: INFO: Waiting for pod pod-checkpoint-missing7965d949-f7df-46a4-9d98-1f16a8984454 to disappear
Oct 26 21:19:05.120: INFO: Pod pod-checkpoint-missing7965d949-f7df-46a4-9d98-1f16a8984454 still exists
Oct 26 21:19:07.120: INFO: Waiting for pod pod-checkpoint-missing7965d949-f7df-46a4-9d98-1f16a8984454 to disappear
Oct 26 21:19:07.124: INFO: Pod pod-checkpoint-missing7965d949-f7df-46a4-9d98-1f16a8984454 no longer exists
STEP: Waiting for checkpoint to be removed
STEP: Search checkpoints containing "pod-checkpoint-missing7965d949-f7df-46a4-9d98-1f16a8984454"
Oct 26 21:19:07.139: INFO: grep from dockershim checkpoint directory returns error: exit status 1
[AfterEach] [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 26 21:19:07.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dockerhism-checkpoint-test-1757" for this suite.

• [SLOW TEST:6.093 seconds]
... skipping 37 lines ...
Oct 26 21:19:23.220: INFO: Checkpoint of "pod-checkpoint-no-disrupt27b1710b-ee85-4f0d-be83-6d7ce41aba14" still exists: [/var/lib/dockershim/sandbox/9328ec8244e1f00ea26acdd6f738628f7355ea163f4c6d52facd7f9874776e4f]
STEP: Search checkpoints containing "pod-checkpoint-no-disrupt27b1710b-ee85-4f0d-be83-6d7ce41aba14"
Oct 26 21:19:33.221: INFO: Checkpoint of "pod-checkpoint-no-disrupt27b1710b-ee85-4f0d-be83-6d7ce41aba14" still exists: [/var/lib/dockershim/sandbox/9328ec8244e1f00ea26acdd6f738628f7355ea163f4c6d52facd7f9874776e4f]
STEP: Search checkpoints containing "pod-checkpoint-no-disrupt27b1710b-ee85-4f0d-be83-6d7ce41aba14"
Oct 26 21:19:43.220: INFO: Checkpoint of "pod-checkpoint-no-disrupt27b1710b-ee85-4f0d-be83-6d7ce41aba14" still exists: [/var/lib/dockershim/sandbox/9328ec8244e1f00ea26acdd6f738628f7355ea163f4c6d52facd7f9874776e4f]
STEP: Search checkpoints containing "pod-checkpoint-no-disrupt27b1710b-ee85-4f0d-be83-6d7ce41aba14"
Oct 26 21:19:53.221: INFO: grep from dockershim checkpoint directory returns error: exit status 1
[AfterEach] [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 26 21:19:53.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dockerhism-checkpoint-test-9196" for this suite.

• [SLOW TEST:46.086 seconds]
... skipping 22 lines ...
I1026 21:19:57.017182    3009 util.go:48] Running readiness check for service "kubelet"
STEP: setting initial state "correct"
I1026 21:19:58.019279    3009 server.go:182] Initial health check passed for service "kubelet"
I1026 21:20:09.032065    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:20:09.047451    3009 server.go:171] Running health check for service "kubelet"
I1026 21:20:09.047482    3009 util.go:48] Running readiness check for service "kubelet"
STEP: from "correct" to "fail-parse"
I1026 21:20:10.049845    3009 server.go:182] Initial health check passed for service "kubelet"
I1026 21:20:21.062670    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:20:21.073766    3009 server.go:171] Running health check for service "kubelet"
I1026 21:20:21.073797    3009 util.go:48] Running readiness check for service "kubelet"
STEP: back to "correct" from "fail-parse"
I1026 21:20:22.075719    3009 server.go:182] Initial health check passed for service "kubelet"
I1026 21:20:33.089026    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:20:33.100371    3009 server.go:171] Running health check for service "kubelet"
I1026 21:20:33.100397    3009 util.go:48] Running readiness check for service "kubelet"
STEP: from "correct" to "fail-validate"
I1026 21:20:34.102145    3009 server.go:182] Initial health check passed for service "kubelet"
I1026 21:20:45.115911    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:20:45.124916    3009 server.go:171] Running health check for service "kubelet"
I1026 21:20:45.124944    3009 util.go:48] Running readiness check for service "kubelet"
STEP: back to "correct" from "fail-validate"
I1026 21:20:46.127143    3009 server.go:182] Initial health check passed for service "kubelet"
I1026 21:20:56.138279    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:20:56.152458    3009 server.go:171] Running health check for service "kubelet"
I1026 21:20:56.152490    3009 util.go:48] Running readiness check for service "kubelet"
STEP: setting initial state "fail-parse"
I1026 21:20:57.153812    3009 server.go:182] Initial health check passed for service "kubelet"
I1026 21:21:07.172474    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:21:07.193181    3009 server.go:171] Running health check for service "kubelet"
I1026 21:21:07.193227    3009 util.go:48] Running readiness check for service "kubelet"
STEP: from "fail-parse" to "fail-validate"
I1026 21:21:08.194678    3009 server.go:182] Initial health check passed for service "kubelet"
I1026 21:21:18.205628    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:21:18.220992    3009 server.go:171] Running health check for service "kubelet"
I1026 21:21:18.221026    3009 util.go:48] Running readiness check for service "kubelet"
STEP: back to "fail-parse" from "fail-validate"
I1026 21:21:19.223034    3009 server.go:182] Initial health check passed for service "kubelet"
I1026 21:21:30.236247    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:21:30.251851    3009 server.go:171] Running health check for service "kubelet"
I1026 21:21:30.251893    3009 util.go:48] Running readiness check for service "kubelet"
I1026 21:21:31.252676    3009 server.go:182] Initial health check passed for service "kubelet"
STEP: setting initial state "fail-validate"
I1026 21:21:42.262644    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:21:42.301313    3009 server.go:171] Running health check for service "kubelet"
I1026 21:21:42.301347    3009 util.go:48] Running readiness check for service "kubelet"
I1026 21:21:43.302402    3009 server.go:182] Initial health check passed for service "kubelet"
[AfterEach] 
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:123
... skipping 26 lines ...
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename device-plugin-gpus-errors
Oct 26 21:21:55.485: INFO: Skipping waiting for service account
[BeforeEach] DevicePlugin
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/gpu_device_plugin_test.go:68
STEP: Ensuring that Nvidia GPUs exist on the node
Oct 26 21:21:55.495: INFO: check for nvidia GPUs failed. Got Error: exit status 1
[AfterEach] DevicePlugin
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/gpu_device_plugin_test.go:91
[AfterEach] [sig-node] NVIDIA GPU Device Plugin [Feature:GPUDevicePlugin][NodeFeature:GPUDevicePlugin][Serial] [Disruptive]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 26 21:21:55.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "device-plugin-gpus-errors-812" for this suite.
... skipping 111 lines ...
LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed.
, kubelet-20211026T211704
W1026 21:25:46.684101    3009 util.go:469] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": read tcp 127.0.0.1:56780->127.0.0.1:10255: read: connection reset by peer
Oct 26 21:25:46.702: INFO: Get running kubelet with systemctl:   UNIT                            LOAD   ACTIVE SUB     DESCRIPTION                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  
  kubelet-20211026T211704.service loaded active running /tmp/node-e2e-20211026T211704/kubelet --kubeconfig /tmp/node-e2e-20211026T211704/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --feature-gates DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --dynamic-config-dir /tmp/node-e2e-20211026T211704/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20211026T211704/cni/bin --cni-conf-dir /tmp/node-e2e-20211026T211704/cni/net.d --cni-cache-dir /tmp/node-e2e-20211026T211704/cni/cache --hostname-override n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5 --container-runtime docker --container-runtime-endpoint unix:///var/run/dockershim.sock --config /tmp/node-e2e-20211026T211704/kubelet-config --kernel-memcg-notification=true --cluster-domain=cluster.local --cgroups-per-qos=true --cgroup-root=/

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
, kubelet-20211026T211704
W1026 21:25:46.746115    3009 util.go:469] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
STEP: Waiting for hugepages resource to become available on the local node
W1026 21:25:56.799986    3009 warnings.go:70] spec.configSource: deprecated in v1.22, support removal is planned in v1.23
I1026 21:25:58.593961    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:25:58.603452    3009 server.go:171] Running health check for service "kubelet"
I1026 21:25:58.603483    3009 util.go:48] Running readiness check for service "kubelet"
I1026 21:25:59.605574    3009 server.go:182] Initial health check passed for service "kubelet"
... skipping 93 lines ...
    keeps GPU assignation to pods after the device plugin has been removed.
    _output/local/go/src/k8s.io/kubernetes/test/e2e_node/gpu_device_plugin_test.go:119
------------------------------
SSSSSSSSSSS
------------------------------
[sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] Without SRIOV devices in the system 
  should return the expected error with the feature gate disabled
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/podresources_test.go:681
[BeforeEach] [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename podresources-test
Oct 26 21:27:48.076: INFO: Skipping waiting for service account
[It] should return the expected error with the feature gate disabled
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/podresources_test.go:681
Oct 26 21:27:48.076: INFO: Only supported when KubeletPodResourcesGetAllocatable feature is disabled
[AfterEach] [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 26 21:27:48.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podresources-test-2828" for this suite.

S [SKIPPING] [0.012 seconds]
[sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources]
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:23
  Without SRIOV devices in the system
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/podresources_test.go:620
    should return the expected error with the feature gate disabled [It]
    _output/local/go/src/k8s.io/kubernetes/test/e2e_node/podresources_test.go:681

    Only supported when KubeletPodResourcesGetAllocatable feature is disabled

    _output/local/go/src/k8s.io/kubernetes/test/e2e_node/podresources_test.go:682
------------------------------
... skipping 380 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 26 21:40:22.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resource-usage-5948" for this suite.
[AfterEach] [sig-node] Resource-usage [Serial] [Slow]
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:60
W1026 21:40:22.811579    3009 metrics_grabber.go:110] Can't find any pods in namespace kube-system to grab metrics from
Oct 26 21:40:22.836: INFO: runtime operation error metrics:
node "n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5" runtime operation error rate:
operation "remove_container": total - 23; error rate - 0.000000; timeout rate - 0.000000
operation "start_container": total - 22; error rate - 0.000000; timeout rate - 0.000000
operation "inspect_container": total - 246; error rate - 0.004065; timeout rate - 0.000000
operation "info": total - 0; error rate - NaN; timeout rate - NaN
operation "stop_container": total - 38; error rate - 0.000000; timeout rate - 0.000000
operation "version": total - 195; error rate - 0.000000; timeout rate - 0.000000
operation "inspect_image": total - 95; error rate - 0.000000; timeout rate - 0.000000
operation "list_containers": total - 2521; error rate - 0.000000; timeout rate - 0.000000
operation "list_images": total - 89; error rate - 0.000000; timeout rate - 0.000000
operation "create_container": total - 22; error rate - 0.000000; timeout rate - 0.000000



• [SLOW TEST:651.314 seconds]
[sig-node] Resource-usage [Serial] [Slow]
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:23
... skipping 33 lines ...
STEP: Building a namespace api object, basename topology-manager-test
Oct 26 21:40:26.891: INFO: Skipping waiting for service account
[It] run Topology Manager policy test suite
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/topology_manager_test.go:870
STEP: by configuring Topology Manager policy to single-numa-node
Oct 26 21:40:26.897: INFO: Configuring topology Manager policy to single-numa-node
Oct 26 21:40:26.900: INFO: failed to find any VF device from [{0000:00:00.0 -1 false false} {0000:00:01.0 -1 false false} {0000:00:01.3 -1 false false} {0000:00:03.0 -1 false false} {0000:00:04.0 -1 false false} {0000:00:05.0 -1 false false}]
Oct 26 21:40:26.900: INFO: New kubelet config is {{ } %!s(bool=true) /tmp/node-e2e-20211026T211704/static-pods570862951 {1m0s} {10s} {20s}  map[] 0.0.0.0 %!s(int32=10250) %!s(int32=10255) /usr/libexec/kubernetes/kubelet-plugins/volume/exec/  /var/lib/kubelet/pki/kubelet.crt /var/lib/kubelet/pki/kubelet.key []  %!s(bool=false) %!s(bool=false) {{} {%!s(bool=false) {2m0s}} {%!s(bool=true)}} {AlwaysAllow {{5m0s} {30s}}} %!s(int32=5) %!s(int32=10) %!s(int32=5) %!s(int32=10) %!s(bool=true) %!s(bool=false) %!s(int32=10248) 127.0.0.1 %!s(int32=-999) cluster.local [] {4h0m0s} {10s} {5m0s} %!s(int32=40) {2m0s} %!s(int32=85) %!s(int32=80) {10s} /kubelet.slice  / %!s(bool=true) cgroupfs static map[] {1s} None single-numa-node container map[] {2m0s} promiscuous-bridge %!s(int32=110) 10.100.0.0/24 %!s(int64=-1) /etc/resolv.conf %!s(bool=false) %!s(bool=true) {100ms} %!s(int64=1000000) %!s(int32=50) application/vnd.kubernetes.protobuf %!s(int32=5) %!s(int32=10) %!s(bool=false) map[memory.available:250Mi nodefs.available:10% nodefs.inodesFree:5%] map[] map[] {30s} %!s(int32=0) map[nodefs.available:5% nodefs.inodesFree:5%] %!s(int32=0) %!s(bool=true) %!s(bool=false) %!s(bool=true) %!s(int32=14) %!s(int32=15) map[CPUManager:%!s(bool=true) DynamicKubeletConfig:%!s(bool=true) LocalStorageCapacityIsolation:%!s(bool=true) TopologyManager:%!s(bool=true)] %!s(bool=true) {} 10Mi %!s(int32=5) Watch [] %!s(bool=true) map[] map[cpu:200m]   [pods]   {text %!s(bool=false) {{%!s(bool=false) {{{%!s(int64=0) %!s(resource.Scale=0)} {%!s(*inf.Dec=<nil>)} 0 DecimalSI}}}}} %!s(bool=true) {0s} {0s} [] %!s(bool=true) %!s(bool=true) %!s(bool=false) %!s(*float64=0xc000b7d6a8)}
W1026 21:40:26.920466    3009 warnings.go:70] spec.configSource: deprecated in v1.22, support removal is planned in v1.23
I1026 21:40:30.472684    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:40:30.493381    3009 server.go:171] Running health check for service "kubelet"
I1026 21:40:30.493424    3009 util.go:48] Running readiness check for service "kubelet"
I1026 21:40:31.495109    3009 server.go:182] Initial health check passed for service "kubelet"
... skipping 35 lines ...
I1026 21:42:17.360727    3009 remote_runtime.go:54] "Connecting to runtime service" endpoint="unix:///var/run/dockershim.sock"
I1026 21:42:17.360872    3009 remote_image.go:41] "Connecting to image service" endpoint="unix:///var/run/dockershim.sock"
Oct 26 21:42:18.365: INFO: Skipping rest of the CPU Manager tests since CPU capacity < 3
[AfterEach] With kubeconfig updated to static CPU Manager policy run the Topology Manager tests
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/topology_manager_test.go:925
W1026 21:42:18.390375    3009 warnings.go:70] spec.configSource: deprecated in v1.22, support removal is planned in v1.23
W1026 21:42:22.908909    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 21:42:22.909466    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 21:42:22.910016    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 21:42:22.910016    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
Oct 26 21:42:23.413: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2b2111dc-a858-45f8-9706-e35a8cd8a1f6] Cache-Control:[no-cache, private] Content-Length:[208] Content-Type:[application/json] Date:[Tue, 26 Oct 2021 21:42:23 GMT]] Body:0xc001468340 ContentLength:208 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001038000 TLS:0xc000d980b0}
I1026 21:42:23.639668    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:42:23.651610    3009 server.go:171] Running health check for service "kubelet"
I1026 21:42:23.651644    3009 util.go:48] Running readiness check for service "kubelet"
I1026 21:42:24.653067    3009 server.go:182] Initial health check passed for service "kubelet"
[AfterEach] [sig-node] Topology Manager [Serial] [Feature:TopologyManager][NodeFeature:TopologyManager]
... skipping 174 lines ...
STEP: Wait for 0 temp events generated
STEP: Wait for 0 total events generated
STEP: Make sure only 0 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should not generate events for too old log
STEP: Inject 3 logs: "temporary error"
STEP: Wait for 0 temp events generated
STEP: Wait for 0 total events generated
STEP: Make sure only 0 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should not change node condition for too old log
STEP: Inject 1 logs: "permanent error 1"
STEP: Wait for 0 temp events generated
STEP: Wait for 0 total events generated
STEP: Make sure only 0 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should generate event for old log within lookback duration
STEP: Inject 3 logs: "temporary error"
STEP: Wait for 3 temp events generated
STEP: Wait for 3 total events generated
STEP: Make sure only 3 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should change node condition for old log within lookback duration
STEP: Inject 1 logs: "permanent error 1"
STEP: Wait for 3 temp events generated
STEP: Wait for 4 total events generated
STEP: Make sure only 4 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should generate event for new log
STEP: Inject 3 logs: "temporary error"
STEP: Wait for 6 temp events generated
STEP: Wait for 7 total events generated
STEP: Make sure only 7 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should not update node condition with the same reason
STEP: Inject 1 logs: "permanent error 1different message"
STEP: Wait for 6 temp events generated
STEP: Wait for 7 total events generated
STEP: Make sure only 7 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should change node condition for new log
STEP: Inject 1 logs: "permanent error 2"
STEP: Wait for 6 temp events generated
STEP: Wait for 8 total events generated
STEP: Make sure only 8 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
[AfterEach] SystemLogMonitor
... skipping 89 lines ...
Oct 26 21:44:40.508: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:44:40.508: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 0
Oct 26 21:44:40.508: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:44:40.508: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:44:40.510: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:44:40.510: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
STEP: making sure pressure from test has surfaced before continuing
STEP: Waiting for NodeCondition: NoPressure to no longer exist on the node
Oct 26 21:45:00.531: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000680960
Oct 26 21:45:00.531: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000680960
Oct 26 21:45:00.531: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:00.531: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
... skipping 11 lines ...
Oct 26 21:45:00.567: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:00.567: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:00.567: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:00.581: INFO: Kubelet Metrics: []
Oct 26 21:45:00.585: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:00.585: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:02.601: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000680960
Oct 26 21:45:02.601: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000680960
Oct 26 21:45:02.601: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:02.601: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:02.601: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:02.601: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:02.601: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:02.601: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:02.615: INFO: Kubelet Metrics: []
Oct 26 21:45:02.620: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:02.621: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:04.636: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000680960
Oct 26 21:45:04.636: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000680960
Oct 26 21:45:04.636: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:04.636: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:04.636: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:04.636: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:04.636: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:04.636: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:04.651: INFO: Kubelet Metrics: []
Oct 26 21:45:04.655: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:04.655: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:06.677: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000828416
Oct 26 21:45:06.677: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000828416
Oct 26 21:45:06.677: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:06.677: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:06.677: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:06.677: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:06.677: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:06.677: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:06.701: INFO: Kubelet Metrics: []
Oct 26 21:45:06.704: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:06.704: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:08.719: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000828416
Oct 26 21:45:08.719: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000828416
Oct 26 21:45:08.719: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:08.719: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:08.719: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:08.719: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:08.719: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:08.719: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:08.733: INFO: Kubelet Metrics: []
Oct 26 21:45:08.736: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:08.736: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:10.754: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000828416
Oct 26 21:45:10.754: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000828416
Oct 26 21:45:10.754: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:10.754: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:10.754: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:10.754: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:10.754: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:10.754: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:10.765: INFO: Kubelet Metrics: []
Oct 26 21:45:10.770: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:10.770: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:12.790: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000828416
Oct 26 21:45:12.790: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000828416
Oct 26 21:45:12.790: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:12.790: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:12.790: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:12.790: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:12.790: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:12.790: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:12.804: INFO: Kubelet Metrics: []
Oct 26 21:45:12.808: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:12.808: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:14.825: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000828416
Oct 26 21:45:14.825: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000828416
Oct 26 21:45:14.825: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:14.825: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:14.825: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:14.825: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:14.825: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:14.825: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:14.848: INFO: Kubelet Metrics: []
Oct 26 21:45:14.853: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:14.853: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:16.872: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000812032
Oct 26 21:45:16.872: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000812032
Oct 26 21:45:16.872: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:16.872: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:16.872: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:16.872: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:16.872: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:16.872: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:16.891: INFO: Kubelet Metrics: []
Oct 26 21:45:16.894: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:16.894: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:18.915: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000812032
Oct 26 21:45:18.915: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000812032
Oct 26 21:45:18.915: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:18.915: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:18.915: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:18.915: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:18.915: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:18.915: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:18.929: INFO: Kubelet Metrics: []
Oct 26 21:45:18.933: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:18.933: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:20.953: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000812032
Oct 26 21:45:20.953: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000812032
Oct 26 21:45:20.953: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:20.953: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:20.953: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:20.953: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:20.953: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:20.953: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:20.975: INFO: Kubelet Metrics: []
Oct 26 21:45:20.979: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:20.979: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:22.992: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000812032
Oct 26 21:45:22.992: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000812032
Oct 26 21:45:22.993: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:22.993: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:22.993: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:22.993: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:22.993: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:22.993: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:23.021: INFO: Kubelet Metrics: []
Oct 26 21:45:23.027: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:23.027: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:25.050: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000812032
Oct 26 21:45:25.050: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000812032
Oct 26 21:45:25.051: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:25.051: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:25.051: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:25.051: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:25.051: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:25.051: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:25.065: INFO: Kubelet Metrics: []
Oct 26 21:45:25.071: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:25.071: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:27.088: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000803840
Oct 26 21:45:27.089: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000803840
Oct 26 21:45:27.089: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:27.089: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:27.089: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:27.089: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:27.089: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:27.089: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:27.103: INFO: Kubelet Metrics: []
Oct 26 21:45:27.107: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:27.107: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:29.123: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000803840
Oct 26 21:45:29.123: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000803840
Oct 26 21:45:29.123: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:29.123: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:29.123: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:29.123: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:29.123: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:29.123: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:29.136: INFO: Kubelet Metrics: []
Oct 26 21:45:29.139: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:29.139: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:31.160: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000803840
Oct 26 21:45:31.160: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000803840
Oct 26 21:45:31.160: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:31.160: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:31.160: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:31.160: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:31.160: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:31.160: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:31.183: INFO: Kubelet Metrics: []
Oct 26 21:45:31.187: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:31.187: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:33.206: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000803840
Oct 26 21:45:33.206: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000803840
Oct 26 21:45:33.206: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:33.206: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:33.206: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:33.206: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:33.206: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:33.206: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:33.230: INFO: Kubelet Metrics: []
Oct 26 21:45:33.233: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:33.233: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:35.250: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000803840
Oct 26 21:45:35.250: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000803840
Oct 26 21:45:35.250: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:35.250: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:35.250: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:35.250: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:35.250: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:35.250: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:35.264: INFO: Kubelet Metrics: []
Oct 26 21:45:35.270: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:35.270: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:37.289: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000795648
Oct 26 21:45:37.290: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000795648
Oct 26 21:45:37.290: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:37.290: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:37.290: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:37.290: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:37.290: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:37.290: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:37.304: INFO: Kubelet Metrics: []
Oct 26 21:45:37.307: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:37.307: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:39.325: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000795648
Oct 26 21:45:39.325: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000795648
Oct 26 21:45:39.325: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:39.325: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:39.325: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:39.325: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:39.325: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:39.325: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:39.348: INFO: Kubelet Metrics: []
Oct 26 21:45:39.351: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:39.351: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:41.364: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000795648
Oct 26 21:45:41.364: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000795648
Oct 26 21:45:41.364: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:41.364: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:41.364: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:41.364: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:41.364: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:41.364: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:41.379: INFO: Kubelet Metrics: []
Oct 26 21:45:41.383: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:41.383: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:43.403: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000795648
Oct 26 21:45:43.403: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000795648
Oct 26 21:45:43.403: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:43.403: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:43.403: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:43.403: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:43.403: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:43.403: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:43.416: INFO: Kubelet Metrics: []
Oct 26 21:45:43.422: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:43.422: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:45.439: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000791552
Oct 26 21:45:45.439: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000791552
Oct 26 21:45:45.439: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:45.439: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:45.439: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:45.439: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:45.439: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:45.439: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:45.450: INFO: Kubelet Metrics: []
Oct 26 21:45:45.454: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:45.454: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:47.471: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000791552
Oct 26 21:45:47.471: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000791552
Oct 26 21:45:47.471: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:47.471: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:47.471: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:47.471: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:47.471: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:47.471: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:47.498: INFO: Kubelet Metrics: []
Oct 26 21:45:47.501: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:47.501: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:49.533: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000791552
Oct 26 21:45:49.533: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000791552
Oct 26 21:45:49.533: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:49.533: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:49.533: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:49.533: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:49.533: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:49.533: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:49.546: INFO: Kubelet Metrics: []
Oct 26 21:45:49.549: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:49.549: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:51.567: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000791552
Oct 26 21:45:51.567: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000791552
Oct 26 21:45:51.567: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:51.567: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:51.567: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:51.567: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:51.567: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:51.567: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:51.581: INFO: Kubelet Metrics: []
Oct 26 21:45:51.586: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:51.586: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:53.601: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000791552
Oct 26 21:45:53.601: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000791552
Oct 26 21:45:53.601: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:53.601: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:53.601: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:53.601: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:53.601: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:53.601: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:53.614: INFO: Kubelet Metrics: []
Oct 26 21:45:53.618: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:53.618: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:55.639: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000771072
Oct 26 21:45:55.639: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000771072
Oct 26 21:45:55.639: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:55.639: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:55.639: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:55.639: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:55.639: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:55.639: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:55.668: INFO: Kubelet Metrics: []
Oct 26 21:45:55.674: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:55.674: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:57.687: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000771072
Oct 26 21:45:57.687: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000771072
Oct 26 21:45:57.687: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:57.687: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:57.687: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:57.687: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:57.687: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:57.687: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:57.700: INFO: Kubelet Metrics: []
Oct 26 21:45:57.703: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:57.703: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:59.719: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000771072
Oct 26 21:45:59.719: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000771072
Oct 26 21:45:59.719: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:59.719: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:59.719: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:59.719: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:59.719: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:59.719: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:59.733: INFO: Kubelet Metrics: []
Oct 26 21:45:59.737: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:59.737: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
STEP: checking for correctly formatted eviction events
[AfterEach] 
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:579
STEP: deleting pods
STEP: deleting pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:46:00.560: INFO: Waiting for pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod to disappear
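The long run of log lines above is the eviction test polling the kubelet summary API every ~2 seconds: it logs filesystem capacity, per-pod usage, and each pod's phase, then re-checks that the "important" (under-sizelimit) pod has not failed. A minimal sketch of that ordering invariant, with hypothetical names standing in for the actual e2e framework helpers:

```python
def check_eviction_ordering(pods, fetch_phase):
    """Hypothetical simplification of one polling iteration: fetch each
    pod's phase and assert that no under-sizelimit ("important") pod has
    failed while the test is still waiting on its eviction target."""
    phases = {name: fetch_phase(name) for name in pods}
    important = [p for p in pods if "under-sizelimit" in p]
    for p in important:
        assert phases[p] != "Failed", f"important pod {p} failed early"
    return phases

# Stub standing in for a kubelet summary-API query (assumption, not the
# real client): every pod is still Running, as in the log above.
def fake_fetch_phase(name):
    return "Running"

pods = [
    "emptydir-concealed-disk-over-sizelimit-quotas-false-pod",
    "emptydir-concealed-disk-under-sizelimit-quotas-false-pod",
]
print(check_eviction_ordering(pods, fake_fetch_phase))
```

The real test repeats this until the eviction condition triggers or the timeout expires, which is why the same summary block appears dozens of times.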
... skipping 115 lines ...
[It] should set pids.max for Pod
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/pids_test.go:89
STEP: by creating a G pod
I1026 21:47:27.293468    3009 util.go:247] new configuration has taken effect
STEP: checking if the expected pids settings were applied
Oct 26 21:47:27.302: INFO: Pod to run command: expected=1024; actual=$(cat /tmp/pids//kubepods/podc9fe7d52-37d4-4279-8aa8-0335873e10a7/pids.max); if [ "$expected" -ne "$actual" ]; then exit 1; fi; 
Oct 26 21:47:27.306: INFO: Waiting up to 5m0s for pod "pod18bd7cbe-72d3-4788-a739-c90bc6008c1b" in namespace "pids-limit-test-7896" to be "Succeeded or Failed"
Oct 26 21:47:27.309: INFO: Pod "pod18bd7cbe-72d3-4788-a739-c90bc6008c1b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.054461ms
Oct 26 21:47:29.313: INFO: Pod "pod18bd7cbe-72d3-4788-a739-c90bc6008c1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006952893s
Oct 26 21:47:31.318: INFO: Pod "pod18bd7cbe-72d3-4788-a739-c90bc6008c1b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012556458s
Oct 26 21:47:33.325: INFO: Pod "pod18bd7cbe-72d3-4788-a739-c90bc6008c1b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019780426s
Oct 26 21:47:35.330: INFO: Pod "pod18bd7cbe-72d3-4788-a739-c90bc6008c1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.024344927s
STEP: Saw pod success
Oct 26 21:47:35.330: INFO: Pod "pod18bd7cbe-72d3-4788-a739-c90bc6008c1b" satisfied condition "Succeeded or Failed"
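The pids test above injects a shell command into the pod (`expected=1024; actual=$(cat .../pids.max); ...`) that compares the cgroup's `pids.max` against the configured limit. The same comparison can be sketched in Python; this is an illustration of the check, not the test's actual code:

```python
def pids_limit_ok(pids_max_contents: str, expected: int = 1024) -> bool:
    """Mirror of the in-pod shell check: parse the pids.max cgroup file
    and compare it to the expected limit. The literal value "max" means
    unlimited, which the test would treat as a failure."""
    value = pids_max_contents.strip()
    if value == "max":
        return False
    return int(value) == expected

# "1024\n" is what the cgroup file would contain given the kubelet
# config applied by the test.
print(pids_limit_ok("1024\n"))
```

The in-pod shell version exits non-zero on mismatch, which is what drives the pod to the "Succeeded or Failed" condition the test waits on.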
[AfterEach] With config updated with pids limits
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/util.go:175
W1026 21:47:35.350435    3009 warnings.go:70] spec.configSource: deprecated in v1.22, support removal is planned in v1.23
I1026 21:47:44.068410    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:47:44.100463    3009 server.go:171] Running health check for service "kubelet"
I1026 21:47:44.100487    3009 util.go:48] Running readiness check for service "kubelet"
... skipping 24 lines ...
Oct 26 21:47:45.393: INFO: Skipping waiting for service account
[BeforeEach] Downward API tests for local ephemeral storage
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi.go:38
[It] should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi.go:42
STEP: Creating a pod to test downward api env vars
Oct 26 21:47:45.397: INFO: Waiting up to 5m0s for pod "downward-api-d4d8a11d-f819-42f8-b66d-7087ee650f3d" in namespace "downward-api-8077" to be "Succeeded or Failed"
Oct 26 21:47:45.400: INFO: Pod "downward-api-d4d8a11d-f819-42f8-b66d-7087ee650f3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206113ms
Oct 26 21:47:47.403: INFO: Pod "downward-api-d4d8a11d-f819-42f8-b66d-7087ee650f3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005860821s
STEP: Saw pod success
Oct 26 21:47:47.403: INFO: Pod "downward-api-d4d8a11d-f819-42f8-b66d-7087ee650f3d" satisfied condition "Succeeded or Failed"
Oct 26 21:47:47.406: INFO: Trying to get logs from node n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5 pod downward-api-d4d8a11d-f819-42f8-b66d-7087ee650f3d container dapi-container: <nil>
STEP: delete the pod
Oct 26 21:47:47.417: INFO: Waiting for pod downward-api-d4d8a11d-f819-42f8-b66d-7087ee650f3d to disappear
Oct 26 21:47:47.419: INFO: Pod downward-api-d4d8a11d-f819-42f8-b66d-7087ee650f3d no longer exists
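The downward API test above verifies that a container's ephemeral-storage limit and request are exposed as environment variables. A sketch of what that resolution amounts to, with hypothetical variable names and example quantities (the real mapping is declared via `resourceFieldRef` in the pod spec):

```python
def downward_api_env(resources):
    """Hypothetical rendering of downward-API resourceFieldRef selectors
    into environment-variable values, as the test's dapi-container would
    observe them."""
    return {
        "EPHEMERAL_STORAGE_LIMIT": resources["limits"]["ephemeral-storage"],
        "EPHEMERAL_STORAGE_REQUEST": resources["requests"]["ephemeral-storage"],
    }

env = downward_api_env({
    "limits": {"ephemeral-storage": "64Mi"},
    "requests": {"ephemeral-storage": "32Mi"},
})
print(env)
```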
[AfterEach] [sig-storage] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 54 lines ...
LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed.
, kubelet-20211026T211704
W1026 21:47:47.584686    3009 util.go:469] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": read tcp 127.0.0.1:57242->127.0.0.1:10255: read: connection reset by peer
Oct 26 21:47:47.613: INFO: Get running kubelet with systemctl:   UNIT                            LOAD   ACTIVE SUB    DESCRIPTION
  kubelet-20211026T211704.service loaded active exited /tmp/node-e2e-20211026T211704/kubelet --kubeconfig /tmp/node-e2e-20211026T211704/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --feature-gates DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --dynamic-config-dir /tmp/node-e2e-20211026T211704/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20211026T211704/cni/bin --cni-conf-dir /tmp/node-e2e-20211026T211704/cni/net.d --cni-cache-dir /tmp/node-e2e-20211026T211704/cni/cache --hostname-override n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5 --container-runtime docker --container-runtime-endpoint unix:///var/run/dockershim.sock --config /tmp/node-e2e-20211026T211704/kubelet-config --kernel-memcg-notification=true --cluster-domain=cluster.local --cgroups-per-qos=true --cgroup-root=/

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
, kubelet-20211026T211704
W1026 21:47:47.663560    3009 util.go:469] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
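The warnings above come from the test harness probing the kubelet's read-only port (`http://127.0.0.1:10255/healthz`) while the kubelet restarts; "connection reset" and "connection refused" just mean the endpoint is not up yet and the probe retries. A minimal sketch of one such probe, treating any transport error as "not ready":

```python
import urllib.request
import urllib.error

def kubelet_healthy(url="http://127.0.0.1:10255/healthz", timeout=1.0):
    """Single health probe against the kubelet read-only healthz endpoint.
    Connection resets/refusals (as seen in the log) return False rather
    than raising, so a caller can poll until the kubelet comes back."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

The harness wraps a probe like this in a loop with a deadline, which is why the failed checks are logged as warnings rather than test failures.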
STEP: Waiting for hugepages resource to become available on the local node
W1026 21:47:57.722237    3009 warnings.go:70] spec.configSource: deprecated in v1.22, support removal is planned in v1.23
I1026 21:47:59.117182    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:47:59.134751    3009 server.go:171] Running health check for service "kubelet"
I1026 21:47:59.134787    3009 util.go:48] Running readiness check for service "kubelet"
W1026 21:47:59.246503    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
... skipping 7 lines (identical dockershim connection-refused warnings) ...
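The repeated dockershim reconnect warnings above come from gRPC's connection backoff: each failed dial to the unix socket schedules a retry with exponentially growing delay. A minimal sketch of the schedule, assuming gRPC's documented defaults (base 1s, multiplier 1.6, cap 120s) and omitting the random jitter the real implementation adds:

```go
package main

import "fmt"

// backoffDelays returns the first n reconnect delays (in seconds)
// under gRPC's documented default connection backoff parameters.
// Simplified sketch: the real policy also applies +/-20% jitter.
func backoffDelays(n int) []float64 {
	const (
		base       = 1.0 // initial delay, seconds
		multiplier = 1.6 // growth factor per failed attempt
		cap        = 120.0
	)
	delays := make([]float64, 0, n)
	d := base
	for i := 0; i < n; i++ {
		delays = append(delays, d)
		d *= multiplier
		if d > cap {
			d = cap
		}
	}
	return delays
}

func main() {
	fmt.Println(backoffDelays(4))
}
```

This is why the warning bursts in the log thin out over time rather than repeating at a fixed interval.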
I1026 21:48:00.136026    3009 server.go:182] Initial health check passed for service "kubelet"
I1026 21:48:02.749836    3009 util.go:247] new configuration has taken effect
[It] should succeed to start the pod
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/memory_manager_test.go:758
Oct 26 21:48:02.768: INFO: The status of Pod memory-manager-nonehc2jx is Pending, waiting for it to be Running (with Ready = true)
Oct 26 21:48:04.772: INFO: The status of Pod memory-manager-nonehc2jx is Running (Ready = true)
... skipping 64 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename dynamic-kubelet-configuration-test
Oct 26 21:49:43.862: INFO: Skipping waiting for service account
[BeforeEach] 
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:82
W1026 21:49:48.610472    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
I1026 21:49:49.256192    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:49:49.268736    3009 server.go:171] Running health check for service "kubelet"
I1026 21:49:49.268770    3009 util.go:48] Running readiness check for service "kubelet"
I1026 21:49:50.270726    3009 server.go:182] Initial health check passed for service "kubelet"
[AfterEach] 
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:123
... skipping 2 lines ...
STEP: Collecting events from namespace "dynamic-kubelet-configuration-test-4673".
STEP: Found 0 events.
Oct 26 21:51:49.916: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Oct 26 21:51:49.916: INFO: 
Oct 26 21:51:49.918: INFO: 
Logging node info for node n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5
Oct 26 21:51:49.919: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5    273ccda5-865f-4ac4-bc03-6ac94fb12171 1506 0 2021-10-26 21:18:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{Go-http-client Update v1 2021-10-26 21:18:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {Go-http-client Update v1 2021-10-26 21:42:36 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{},"f:hugepages-2Mi":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{20629221376 0} {<nil>} 20145724Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7817089024 0} {<nil>} 7633876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{18566299208 0} {<nil>} 18566299208 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: 
{{0 0} {<nil>} 0 DecimalSI},memory: {{7554945024 0} {<nil>} 7377876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-26 21:49:49 +0000 UTC,LastTransitionTime:2021-10-26 21:18:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-26 21:49:49 +0000 UTC,LastTransitionTime:2021-10-26 21:18:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-26 21:49:49 +0000 UTC,LastTransitionTime:2021-10-26 21:18:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-26 21:49:49 +0000 UTC,LastTransitionTime:2021-10-26 21:20:56 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.6,},NodeAddress{Type:Hostname,Address:n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4ad02c548d37340b4ddca55175c0d8bc,SystemUUID:4ad02c54-8d37-340b-4ddc-a55175c0d8bc,BootID:68e32436-f588-4599-84e3-955d79b00fcc,KernelVersion:5.4.0-1039-gke,OSImage:Ubuntu 20.04.2 LTS,ContainerRuntimeVersion:docker://19.3.8,KubeletVersion:v1.23.0-alpha.3.549+bb7a6b430b242d,KubeProxyVersion:v1.23.0-alpha.3.549+bb7a6b430b242d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1631162940,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:853285759,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:033a12fe65438751690b519cebd4135a3485771086bcf437212b7b886bb7956c k8s.gcr.io/e2e-test-images/volume/gluster:1.3],SizeBytes:340331177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e k8s.gcr.io/e2e-test-images/volume/nfs:1.3],SizeBytes:263886631,},ContainerImage{Names:[quay.io/kubevirt/device-plugin-kvm@sha256:b44bc0fd6ff8987091bbc7ec630e5ee6683be40d151b4e6635e24afb5807b21a quay.io/kubevirt/device-plugin-kvm:latest],SizeBytes:249864259,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43 k8s.gcr.io/e2e-test-images/agnhost:2.33],SizeBytes:124737480,},ContainerImage{Names:[debian@sha256:4d6ab716de467aad58e91b1b720f0badd7478847ec7a18f66027d0f8a329a43c 
debian:latest],SizeBytes:123864999,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector/node-problem-detector@sha256:0ce71ef6d759425d22b10e65b439749fe5d13377a188e2fc060b731cdb4e6901 k8s.gcr.io/node-problem-detector/node-problem-detector:v0.8.7],SizeBytes:113172715,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:8285539c79625b192a5e33fc3d21edc1a7776fb9afe15fae3b5037a7a8020839 k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.2],SizeBytes:96399029,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:90b5cfc5451428aad4dd6af9960640f2506804d35aa05e83c11bf0a46ac318c8 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.2],SizeBytes:96397229,},ContainerImage{Names:[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest],SizeBytes:69583040,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:42321438,},ContainerImage{Names:[nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb nfvpe/sriov-device-plugin:v3.1],SizeBytes:25318421,},ContainerImage{Names:[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:18981551,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:647d092bada3b46c449d875adf31d71c1dd29c244e9cca6a04fddf9d6bcac136 
k8s.gcr.io/e2e-test-images/ipc-utils:1.3],SizeBytes:10039660,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5494760,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1113554,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:682696,},ContainerImage{Names:[gke-nvidia-installer:fixed],SizeBytes:75,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
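The node dump above mixes binary-SI quantities ("20145724Ki" ephemeral-storage capacity, "7633876Ki" memory) with plain-byte DecimalSI values. A simplified sketch of the suffix handling that k8s.io/apimachinery's resource.Quantity performs, covering only the binary suffixes that appear in this log:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseBinaryQuantity converts a binary-SI quantity string to bytes.
// Simplified sketch of resource.Quantity parsing; only the suffixes
// seen in the node dump are handled, and plain numbers are bytes.
func parseBinaryQuantity(q string) (int64, error) {
	suffixes := []struct {
		s    string
		mult int64
	}{
		{"Ki", 1 << 10},
		{"Mi", 1 << 20},
		{"Gi", 1 << 30},
	}
	for _, suf := range suffixes {
		if strings.HasSuffix(q, suf.s) {
			n, err := strconv.ParseInt(strings.TrimSuffix(q, suf.s), 10, 64)
			if err != nil {
				return 0, err
			}
			return n * suf.mult, nil
		}
	}
	return strconv.ParseInt(q, 10, 64)
}

func main() {
	b, _ := parseBinaryQuantity("20145724Ki")
	fmt.Println(b) // prints 20629221376, matching the capacity shown in bytes above
}
```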
Oct 26 21:51:49.920: INFO: 
Logging kubelet events for node n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5
Oct 26 21:51:49.921: INFO: 
Logging pods the kubelet thinks are on node n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5
W1026 21:51:49.940757    3009 metrics_grabber.go:110] Can't find any pods in namespace kube-system to grab metrics from
Oct 26 21:51:49.964: INFO: 
... skipping 163 lines ...
Oct 26 21:54:04.022: INFO: Skipping waiting for service account
[BeforeEach] Downward API tests for local ephemeral storage
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi.go:38
[It] should provide default limits.ephemeral-storage from node allocatable
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi.go:70
STEP: Creating a pod to test downward api env vars
Oct 26 21:54:04.028: INFO: Waiting up to 5m0s for pod "downward-api-bf664ee4-7d31-45c1-a634-fb328654c779" in namespace "downward-api-1044" to be "Succeeded or Failed"
Oct 26 21:54:04.030: INFO: Pod "downward-api-bf664ee4-7d31-45c1-a634-fb328654c779": Phase="Pending", Reason="", readiness=false. Elapsed: 1.587105ms
Oct 26 21:54:06.033: INFO: Pod "downward-api-bf664ee4-7d31-45c1-a634-fb328654c779": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004711237s
STEP: Saw pod success
Oct 26 21:54:06.033: INFO: Pod "downward-api-bf664ee4-7d31-45c1-a634-fb328654c779" satisfied condition "Succeeded or Failed"
Oct 26 21:54:06.034: INFO: Trying to get logs from node n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5 pod downward-api-bf664ee4-7d31-45c1-a634-fb328654c779 container dapi-container: <nil>
STEP: delete the pod
Oct 26 21:54:06.057: INFO: Waiting for pod downward-api-bf664ee4-7d31-45c1-a634-fb328654c779 to disappear
Oct 26 21:54:06.058: INFO: Pod downward-api-bf664ee4-7d31-45c1-a634-fb328654c779 no longer exists
[AfterEach] [sig-storage] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 86 lines ...
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/memory_manager_test.go:400
Oct 26 21:54:22.144: INFO: Waiting for pod memory-manager-nonehc2jx to disappear
Oct 26 21:54:22.146: INFO: Pod memory-manager-nonehc2jx no longer exists
Oct 26 21:54:22.177: INFO: Hugepages total is set to 0
W1026 21:54:22.200438    3009 warnings.go:70] spec.configSource: deprecated in v1.22, support removal is planned in v1.23
I1026 21:54:22.207509    3009 util.go:247] new configuration has taken effect
W1026 21:54:31.748034    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 21:54:32.212154    3009 util.go:469] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
[AfterEach] [sig-node] Memory Manager [Serial] [Feature:MemoryManager]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 26 21:54:32.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "memory-manager-test-5478" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [10.081 seconds]
... skipping 41 lines ...
    Skipping ContainerLogRotation test since the container runtime is not remote

    _output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_log_rotation_test.go:48
------------------------------
SSSSSSSSSSS
------------------------------
[sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive]  delete and recreate ConfigMap: error while ConfigMap is absent: 
  status and events should match expectations
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:784
[BeforeEach] [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename dynamic-kubelet-configuration-test
Oct 26 21:54:32.230: INFO: Skipping waiting for service account
[BeforeEach] 
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:82
Oct 26 21:54:32.245: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[69adaf09-d481-4eda-8f60-b7c1da3c3164] Cache-Control:[no-cache, private] Content-Length:[208] Content-Type:[application/json] Date:[Tue, 26 Oct 2021 21:54:32 GMT]] Body:0xc00119fdc0 ContentLength:208 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0000e5b00 TLS:0xc000d998c0}
I1026 21:54:32.578890    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:54:32.589546    3009 server.go:171] Running health check for service "kubelet"
I1026 21:54:32.589578    3009 util.go:48] Running readiness check for service "kubelet"
W1026 21:54:32.749261    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
... skipping 13 lines (identical dockershim connection-refused warnings) ...
I1026 21:54:33.590699    3009 server.go:182] Initial health check passed for service "kubelet"
I1026 21:54:44.603875    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:54:44.612947    3009 server.go:171] Running health check for service "kubelet"
I1026 21:54:44.612972    3009 util.go:48] Running readiness check for service "kubelet"
I1026 21:54:45.614010    3009 server.go:182] Initial health check passed for service "kubelet"
[AfterEach] 
... skipping 3 lines ...
STEP: Collecting events from namespace "dynamic-kubelet-configuration-test-7183".
STEP: Found 0 events.
Oct 26 21:56:45.351: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Oct 26 21:56:45.351: INFO: 
Oct 26 21:56:45.353: INFO: 
Logging node info for node n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5
Oct 26 21:56:45.355: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5    273ccda5-865f-4ac4-bc03-6ac94fb12171 1692 0 2021-10-26 21:18:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{Go-http-client Update v1 2021-10-26 21:18:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {Go-http-client Update v1 2021-10-26 21:54:32 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{},"f:hugepages-2Mi":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{20629221376 0} {<nil>} 20145724Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7817089024 0} {<nil>} 7633876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{18566299208 0} {<nil>} 18566299208 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: 
{{0 0} {<nil>} 0 DecimalSI},memory: {{7554945024 0} {<nil>} 7377876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-26 21:54:55 +0000 UTC,LastTransitionTime:2021-10-26 21:18:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-26 21:54:55 +0000 UTC,LastTransitionTime:2021-10-26 21:18:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-26 21:54:55 +0000 UTC,LastTransitionTime:2021-10-26 21:18:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-26 21:54:55 +0000 UTC,LastTransitionTime:2021-10-26 21:20:56 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.6,},NodeAddress{Type:Hostname,Address:n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4ad02c548d37340b4ddca55175c0d8bc,SystemUUID:4ad02c54-8d37-340b-4ddc-a55175c0d8bc,BootID:68e32436-f588-4599-84e3-955d79b00fcc,KernelVersion:5.4.0-1039-gke,OSImage:Ubuntu 20.04.2 LTS,ContainerRuntimeVersion:docker://19.3.8,KubeletVersion:v1.23.0-alpha.3.549+bb7a6b430b242d,KubeProxyVersion:v1.23.0-alpha.3.549+bb7a6b430b242d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1631162940,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:853285759,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:033a12fe65438751690b519cebd4135a3485771086bcf437212b7b886bb7956c k8s.gcr.io/e2e-test-images/volume/gluster:1.3],SizeBytes:340331177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e k8s.gcr.io/e2e-test-images/volume/nfs:1.3],SizeBytes:263886631,},ContainerImage{Names:[quay.io/kubevirt/device-plugin-kvm@sha256:b44bc0fd6ff8987091bbc7ec630e5ee6683be40d151b4e6635e24afb5807b21a quay.io/kubevirt/device-plugin-kvm:latest],SizeBytes:249864259,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43 k8s.gcr.io/e2e-test-images/agnhost:2.33],SizeBytes:124737480,},ContainerImage{Names:[debian@sha256:4d6ab716de467aad58e91b1b720f0badd7478847ec7a18f66027d0f8a329a43c 
debian:latest],SizeBytes:123864999,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector/node-problem-detector@sha256:0ce71ef6d759425d22b10e65b439749fe5d13377a188e2fc060b731cdb4e6901 k8s.gcr.io/node-problem-detector/node-problem-detector:v0.8.7],SizeBytes:113172715,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:8285539c79625b192a5e33fc3d21edc1a7776fb9afe15fae3b5037a7a8020839 k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.2],SizeBytes:96399029,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:90b5cfc5451428aad4dd6af9960640f2506804d35aa05e83c11bf0a46ac318c8 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.2],SizeBytes:96397229,},ContainerImage{Names:[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest],SizeBytes:69583040,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:42321438,},ContainerImage{Names:[nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb nfvpe/sriov-device-plugin:v3.1],SizeBytes:25318421,},ContainerImage{Names:[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:18981551,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:647d092bada3b46c449d875adf31d71c1dd29c244e9cca6a04fddf9d6bcac136 
k8s.gcr.io/e2e-test-images/ipc-utils:1.3],SizeBytes:10039660,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5494760,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1113554,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:682696,},ContainerImage{Names:[gke-nvidia-installer:fixed],SizeBytes:75,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Oct 26 21:56:45.356: INFO: 
Logging kubelet events for node n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5
Oct 26 21:56:45.357: INFO: 
Logging pods the kubelet thinks are on node n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5
W1026 21:56:45.376742    3009 metrics_grabber.go:110] Can't find any pods in namespace kube-system to grab metrics from
Oct 26 21:56:45.389: INFO: 
... skipping 3 lines ...

• Failure in Spec Setup (BeforeEach) [133.170 seconds]
[sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive]
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:23
  
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:81
    delete and recreate ConfigMap: error while ConfigMap is absent: [BeforeEach]
    _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:783
      status and events should match expectations
      _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:784

      Timed out after 60.000s.
      Expected
... skipping 51 lines ...
STEP: Collecting events from namespace "dynamic-kubelet-configuration-test-1151".
STEP: Found 0 events.
Oct 26 21:58:45.455: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Oct 26 21:58:45.455: INFO: 
Oct 26 21:58:45.457: INFO: 
Logging node info for node n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5
Oct 26 21:58:45.458: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5    273ccda5-865f-4ac4-bc03-6ac94fb12171 1692 0 2021-10-26 21:18:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{Go-http-client Update v1 2021-10-26 21:18:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {Go-http-client Update v1 2021-10-26 21:54:32 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{},"f:hugepages-2Mi":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{20629221376 0} {<nil>} 20145724Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7817089024 0} {<nil>} 7633876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{18566299208 0} {<nil>} 18566299208 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: 
{{0 0} {<nil>} 0 DecimalSI},memory: {{7554945024 0} {<nil>} 7377876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-26 21:54:55 +0000 UTC,LastTransitionTime:2021-10-26 21:18:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-26 21:54:55 +0000 UTC,LastTransitionTime:2021-10-26 21:18:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-26 21:54:55 +0000 UTC,LastTransitionTime:2021-10-26 21:18:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-26 21:54:55 +0000 UTC,LastTransitionTime:2021-10-26 21:20:56 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.6,},NodeAddress{Type:Hostname,Address:n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4ad02c548d37340b4ddca55175c0d8bc,SystemUUID:4ad02c54-8d37-340b-4ddc-a55175c0d8bc,BootID:68e32436-f588-4599-84e3-955d79b00fcc,KernelVersion:5.4.0-1039-gke,OSImage:Ubuntu 20.04.2 LTS,ContainerRuntimeVersion:docker://19.3.8,KubeletVersion:v1.23.0-alpha.3.549+bb7a6b430b242d,KubeProxyVersion:v1.23.0-alpha.3.549+bb7a6b430b242d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1631162940,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:853285759,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:033a12fe65438751690b519cebd4135a3485771086bcf437212b7b886bb7956c k8s.gcr.io/e2e-test-images/volume/gluster:1.3],SizeBytes:340331177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e k8s.gcr.io/e2e-test-images/volume/nfs:1.3],SizeBytes:263886631,},ContainerImage{Names:[quay.io/kubevirt/device-plugin-kvm@sha256:b44bc0fd6ff8987091bbc7ec630e5ee6683be40d151b4e6635e24afb5807b21a quay.io/kubevirt/device-plugin-kvm:latest],SizeBytes:249864259,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43 k8s.gcr.io/e2e-test-images/agnhost:2.33],SizeBytes:124737480,},ContainerImage{Names:[debian@sha256:4d6ab716de467aad58e91b1b720f0badd7478847ec7a18f66027d0f8a329a43c 
debian:latest],SizeBytes:123864999,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector/node-problem-detector@sha256:0ce71ef6d759425d22b10e65b439749fe5d13377a188e2fc060b731cdb4e6901 k8s.gcr.io/node-problem-detector/node-problem-detector:v0.8.7],SizeBytes:113172715,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:8285539c79625b192a5e33fc3d21edc1a7776fb9afe15fae3b5037a7a8020839 k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.2],SizeBytes:96399029,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:90b5cfc5451428aad4dd6af9960640f2506804d35aa05e83c11bf0a46ac318c8 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.2],SizeBytes:96397229,},ContainerImage{Names:[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest],SizeBytes:69583040,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:42321438,},ContainerImage{Names:[nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb nfvpe/sriov-device-plugin:v3.1],SizeBytes:25318421,},ContainerImage{Names:[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:18981551,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:647d092bada3b46c449d875adf31d71c1dd29c244e9cca6a04fddf9d6bcac136 
k8s.gcr.io/e2e-test-images/ipc-utils:1.3],SizeBytes:10039660,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5494760,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1113554,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:682696,},ContainerImage{Names:[gke-nvidia-installer:fixed],SizeBytes:75,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Oct 26 21:58:45.459: INFO: 
Logging kubelet events for node n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5
Oct 26 21:58:45.460: INFO: 
Logging pods the kubelet thinks are on node n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5
W1026 21:58:45.476535    3009 metrics_grabber.go:110] Can't find any pods in namespace kube-system to grab metrics from
Oct 26 21:58:45.492: INFO: 
... skipping 47 lines ...
LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed.
, kubelet-20211026T211704
W1026 21:58:45.628223    3009 util.go:469] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": read tcp 127.0.0.1:57450->127.0.0.1:10255: read: connection reset by peer
Oct 26 21:58:45.647: INFO: Get running kubelet with systemctl:   UNIT                            LOAD   ACTIVE SUB     DESCRIPTION
  kubelet-20211026T211704.service loaded active running /tmp/node-e2e-20211026T211704/kubelet --kubeconfig /tmp/node-e2e-20211026T211704/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --feature-gates DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --dynamic-config-dir /tmp/node-e2e-20211026T211704/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20211026T211704/cni/bin --cni-conf-dir /tmp/node-e2e-20211026T211704/cni/net.d --cni-cache-dir /tmp/node-e2e-20211026T211704/cni/cache --hostname-override n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5 --container-runtime docker --container-runtime-endpoint unix:///var/run/dockershim.sock --config /tmp/node-e2e-20211026T211704/kubelet-config --kernel-memcg-notification=true --cluster-domain=cluster.local --cgroups-per-qos=true --cgroup-root=/

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
, kubelet-20211026T211704
W1026 21:58:45.696160    3009 util.go:469] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I1026 21:58:45.865578    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:58:45.984936    3009 server.go:171] Running health check for service "kubelet"
I1026 21:58:45.984964    3009 util.go:48] Running readiness check for service "kubelet"
I1026 21:58:46.987242    3009 server.go:182] Initial health check passed for service "kubelet"
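The restart sequence above follows a fixed pattern: the harness kills or restarts the kubelet unit, then polls the read-only `/healthz` endpoint on port 10255 until it answers, treating "connection refused" and "connection reset" as not-yet-ready. A minimal sketch of that readiness poll (`wait_healthy` is a hypothetical helper written for illustration, not code from the e2e suite, and it uses GET where the harness logs show HEAD):

```python
import time
import urllib.error
import urllib.request


def wait_healthy(url, timeout=30.0, interval=1.0):
    """Poll a /healthz endpoint until it returns HTTP 200 or the deadline passes.

    Connection errors (e.g. 'connection refused' while the service is
    restarting) are treated as not-yet-healthy and retried, matching the
    warnings interleaved in the log above. Returns True on success, False
    if the timeout expires first.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # endpoint not up yet; retry after the interval
        time.sleep(interval)
    return False
```

A transient failure during restart is therefore expected and only becomes an error if the endpoint never comes back within the deadline.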
STEP: Waiting for hugepages resource to become available on the local node
W1026 21:58:55.755194    3009 warnings.go:70] spec.configSource: deprecated in v1.22, support removal is planned in v1.23
... skipping 135 lines ...
LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed.
, kubelet-20211026T211704
W1026 22:01:21.052368    3009 util.go:469] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": read tcp 127.0.0.1:57558->127.0.0.1:10255: read: connection reset by peer
Oct 26 22:01:21.070: INFO: Get running kubelet with systemctl:   UNIT                            LOAD   ACTIVE SUB     DESCRIPTION
  kubelet-20211026T211704.service loaded active running /tmp/node-e2e-20211026T211704/kubelet --kubeconfig /tmp/node-e2e-20211026T211704/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --feature-gates DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --dynamic-config-dir /tmp/node-e2e-20211026T211704/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20211026T211704/cni/bin --cni-conf-dir /tmp/node-e2e-20211026T211704/cni/net.d --cni-cache-dir /tmp/node-e2e-20211026T211704/cni/cache --hostname-override n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5 --container-runtime docker --container-runtime-endpoint unix:///var/run/dockershim.sock --config /tmp/node-e2e-20211026T211704/kubelet-config --kernel-memcg-notification=true --cluster-domain=cluster.local --cgroups-per-qos=true --cgroup-root=/

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
, kubelet-20211026T211704
W1026 22:01:21.072304    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 22:01:21.072953    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 22:01:21.073061    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 22:01:21.073218    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 22:01:21.073318    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 22:01:21.073949    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 22:01:21.074069    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 22:01:21.074137    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 22:01:21.074176    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 22:01:21.074228    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 22:01:21.074263    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 22:01:21.074339    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 22:01:21.074353    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 22:01:21.074074    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 22:01:21.127449    3009 util.go:469] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I1026 22:01:21.166007    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 22:01:21.178193    3009 server.go:171] Running health check for service "kubelet"
I1026 22:01:21.178226    3009 util.go:48] Running readiness check for service "kubelet"
I1026 22:01:22.180327    3009 server.go:182] Initial health check passed for service "kubelet"
STEP: Waiting for hugepages resource to become available on the local node
W1026 22:01:31.176082    3009 warnings.go:70] spec.configSource: deprecated in v1.22, support removal is planned in v1.23
... skipping 17 lines ...
---------------------------------------------------------
Received interrupt.  Running AfterSuite...
^C again to terminate immediately
I1026 22:02:11.267854    3009 e2e_node_suite_test.go:237] Stopping node services...
I1026 22:02:11.267868    3009 server.go:257] Kill server "services"
I1026 22:02:11.267882    3009 server.go:294] Killing process 4245 (services) with -TERM
E1026 22:02:11.317045    3009 services.go:95] Failed to stop services: error stopping "services": waitid: no child processes
I1026 22:02:11.317073    3009 server.go:257] Kill server "kubelet"
I1026 22:02:11.326972    3009 services.go:156] Fetching log files...
I1026 22:02:11.327055    3009 services.go:165] Get log file "kern.log" with journalctl command [-k].
I1026 22:02:11.341165    3009 services.go:165] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I1026 22:02:11.351280    3009 services.go:165] Get log file "docker.log" with journalctl command [-u docker].
I1026 22:02:11.359097    3009 services.go:165] Get log file "containerd.log" with journalctl command [-u containerd].
... skipping 2 lines ...

JUnit report was created: /tmp/node-e2e-20211026T211704/results/junit_ubuntu_01.xml


Summarizing 3 Failures:

[Fail] [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive]  [BeforeEach] update ConfigMap in-place: state transitions: status and events should match expectations 
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:1192

[Fail] [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive]  [BeforeEach] delete and recreate ConfigMap: error while ConfigMap is absent: status and events should match expectations 
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:1192

[Fail] [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive]  [BeforeEach] update ConfigMap in-place: recover to last-known-good version: status and events should match expectations 
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:1192

Ran 24 of 215 Specs in 2699.905 seconds
FAIL! -- 21 Passed | 3 Failed | 1 Pending | 190 Skipped

Ginkgo ran 1 suite in 45m0.115607624s
Test Suite Failed
, err: exit status 124
I1026 22:02:11.390588   12685 remote.go:198] Test failed unexpectedly. Attempting to retrieve system logs (only works for nodes with journald)
I1026 22:02:11.390619   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@34.68.40.205 -- sudo sh -c 'journalctl --system --all > /tmp/20211026T220211-system.log']
I1026 22:02:11.689784   12685 remote.go:203] Got the system logs from journald; copying it back...
I1026 22:02:11.689847   12685 ssh.go:117] Running the command scp, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@34.68.40.205:/tmp/20211026T220211-system.log /logs/artifacts/a121f945-36a0-11ec-ba9f-a2e2905e9978/n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5-system.log]
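On failure the harness runs `journalctl` on the remote node over ssh and copies the result back with scp, both invoked non-interactively: host-key checking disabled, an explicit identity file, and a keepalive. A small sketch of assembling such an argv (`ssh_args` is a hypothetical helper for illustration, not part of `ssh.go`):

```python
def ssh_args(identity_file, user_host, remote_cmd):
    """Build the argv for a non-interactive ssh invocation like the one
    logged above: no known-hosts prompting, only the given identity file,
    a 30s keepalive, and quiet logging. The '--' separates ssh options
    from the remote command string."""
    opts = [
        "-o", "UserKnownHostsFile=/dev/null",
        "-o", "IdentitiesOnly=yes",
        "-o", "CheckHostIP=no",
        "-o", "StrictHostKeyChecking=no",
        "-o", "ServerAliveInterval=30",
        "-o", "LogLevel=ERROR",
        "-i", identity_file,
    ]
    return ["ssh", *opts, user_host, "--", remote_cmd]
```

Disabling host-key checks is acceptable here only because the target is a throwaway test VM created moments earlier; it would be unsafe for long-lived hosts.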

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>                              START TEST                                >
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Start Test Suite on Host n1-standard-2-cos-89-16108-534-17-1e7097e6
Flag --logtostderr has been deprecated, will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components
... skipping 60 lines ...
Oct 26 21:17:10.438: INFO: Parsing ds from https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/device-plugins/nvidia-gpu/daemonset.yaml
I1026 21:17:10.494327    1509 image_list.go:171] Pre-pulling images with docker [docker.io/nfvpe/sriov-device-plugin:v3.1 google/cadvisor:latest k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff k8s.gcr.io/e2e-test-images/agnhost:2.33 k8s.gcr.io/e2e-test-images/busybox:1.29-2 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 k8s.gcr.io/e2e-test-images/ipc-utils:1.3 k8s.gcr.io/e2e-test-images/nginx:1.14-2 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.2 k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.2 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1 k8s.gcr.io/e2e-test-images/nonewprivs:1.3 k8s.gcr.io/e2e-test-images/nonroot:1.2 k8s.gcr.io/e2e-test-images/perl:5.26 k8s.gcr.io/e2e-test-images/volume/gluster:1.3 k8s.gcr.io/e2e-test-images/volume/nfs:1.3 k8s.gcr.io/node-problem-detector/node-problem-detector:v0.8.7 k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa k8s.gcr.io/pause:3.6 k8s.gcr.io/stress:v1 quay.io/kubevirt/device-plugin-kvm]
I1026 21:18:40.845260    1509 server.go:102] Starting server "services" with command "/tmp/node-e2e-20211026T211704/e2e_node.test --run-services-mode --bearer-token=2-5_SQKd_pSnMc2X --test.timeout=24h0m0s --ginkgo.seed=1635283029 --ginkgo.focus=\\[Serial\\] --ginkgo.skip=\\[Flaky\\]|\\[Benchmark\\]|\\[NodeSpecialFeature:.+\\]|\\[NodeSpecialFeature\\]|\\[NodeAlphaFeature:.+\\]|\\[NodeAlphaFeature\\]|\\[NodeFeature:Eviction\\] --ginkgo.slowSpecThreshold=5.00000 --system-spec-name= --system-spec-file= --extra-envs= --runtime-config= --logtostderr --v 4 --node-name=n1-standard-2-cos-89-16108-534-17-1e7097e6 --report-dir=/tmp/node-e2e-20211026T211704/results --report-prefix=cos-stable1 --image-description=cos-89-16108-534-17 --kubelet-flags=--experimental-mounter-path=/tmp/node-e2e-20211026T211704/mounter --kubelet-flags=--kernel-memcg-notification=true --kubelet-flags=--cluster-domain=cluster.local --dns-domain=cluster.local --feature-gates=DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --kubelet-flags=--cgroups-per-qos=true --cgroup-root=/"
I1026 21:18:40.845305    1509 util.go:48] Running readiness check for service "services"
I1026 21:18:40.845373    1509 server.go:130] Output file for server "services": /tmp/node-e2e-20211026T211704/results/services.log
I1026 21:18:40.845822    1509 server.go:160] Waiting for server "services" start command to complete
W1026 21:18:44.699599    1509 util.go:106] Health check on "https://127.0.0.1:6443/healthz" failed, status=500
I1026 21:18:45.701848    1509 services.go:70] Node services started.
I1026 21:18:45.701866    1509 kubelet.go:100] Starting kubelet
W1026 21:18:45.701949    1509 feature_gate.go:235] Setting deprecated feature gate DynamicKubeletConfig=true. It will be removed in a future release.
I1026 21:18:45.701969    1509 feature_gate.go:245] feature gates: &{map[DynamicKubeletConfig:true LocalStorageCapacityIsolation:true]}
I1026 21:18:45.704187    1509 server.go:102] Starting server "kubelet" with command "/usr/bin/systemd-run -p Delegate=true -p StandardError=file:/tmp/node-e2e-20211026T211704/results/kubelet.log --unit=kubelet-20211026T211704.service --slice=runtime.slice --remain-after-exit /tmp/node-e2e-20211026T211704/kubelet --kubeconfig /tmp/node-e2e-20211026T211704/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --feature-gates DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --dynamic-config-dir /tmp/node-e2e-20211026T211704/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20211026T211704/cni/bin --cni-conf-dir /tmp/node-e2e-20211026T211704/cni/net.d --cni-cache-dir /tmp/node-e2e-20211026T211704/cni/cache --hostname-override n1-standard-2-cos-89-16108-534-17-1e7097e6 --container-runtime docker --container-runtime-endpoint unix:///var/run/dockershim.sock --config /tmp/node-e2e-20211026T211704/kubelet-config --experimental-mounter-path=/tmp/node-e2e-20211026T211704/mounter --kernel-memcg-notification=true --cluster-domain=cluster.local --cgroups-per-qos=true --cgroup-root=/"
I1026 21:18:45.704313    1509 util.go:48] Running readiness check for service "kubelet"
I1026 21:18:45.704369    1509 server.go:130] Output file for server "kubelet": /tmp/node-e2e-20211026T211704/results/kubelet.log
I1026 21:18:45.704678    1509 server.go:171] Running health check for service "kubelet"
I1026 21:18:45.704697    1509 util.go:48] Running readiness check for service "kubelet"
W1026 21:18:46.704980    1509 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W1026 21:18:46.705051    1509 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I1026 21:18:47.706588    1509 server.go:182] Initial health check passed for service "kubelet"
I1026 21:18:47.706940    1509 services.go:80] Kubelet started.
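The 'Starting server "kubelet"' line above shows the kubelet being launched as a transient systemd unit via `systemd-run`: `Delegate=true` hands cgroup management to the unit, `StandardError=file:` routes stderr to the results directory, and `--remain-after-exit` keeps the unit visible to `systemctl list-units` even after the process dies, which is why the later `systemctl` dumps can still find it. A sketch of composing that command line (`systemd_run_cmd` is a hypothetical helper, not the harness's own code):

```python
def systemd_run_cmd(unit, slice_name, logfile, binary, *flags):
    """Compose a systemd-run invocation mirroring the logged one: run
    `binary` as a transient unit in the given slice, with stderr sent
    to `logfile` and the unit retained after exit for inspection."""
    return [
        "/usr/bin/systemd-run",
        "-p", "Delegate=true",
        "-p", f"StandardError=file:{logfile}",
        f"--unit={unit}",
        f"--slice={slice_name}",
        "--remain-after-exit",
        binary,
        *flags,
    ]
```

Restarting the kubelet then reduces to restarting this unit, which is what the repeated 'Restarting server "kubelet" with restart command' lines do.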
I1026 21:18:47.706961    1509 e2e_node_suite_test.go:217] Wait for the node to be ready
Oct 26 21:18:57.757: INFO: Parsing ds from https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/device-plugins/nvidia-gpu/daemonset.yaml
SSSSSSSSSSSSSSSS
------------------------------
... skipping 40 lines ...
I1026 21:19:09.738213    1509 util.go:48] Running readiness check for service "kubelet"
I1026 21:19:10.739425    1509 server.go:182] Initial health check passed for service "kubelet"
STEP: setting initial state "correct"
I1026 21:19:21.751293    1509 server.go:222] Restarting server "kubelet" with restart command
I1026 21:19:21.761273    1509 server.go:171] Running health check for service "kubelet"
I1026 21:19:21.761303    1509 util.go:48] Running readiness check for service "kubelet"
STEP: from "correct" to "fail-parse"
I1026 21:19:22.762844    1509 server.go:182] Initial health check passed for service "kubelet"
I1026 21:19:33.776293    1509 server.go:222] Restarting server "kubelet" with restart command
I1026 21:19:33.783602    1509 server.go:171] Running health check for service "kubelet"
I1026 21:19:33.783629    1509 util.go:48] Running readiness check for service "kubelet"
STEP: back to "correct" from "fail-parse"
I1026 21:19:34.785747    1509 server.go:182] Initial health check passed for service "kubelet"
I1026 21:19:45.800359    1509 server.go:222] Restarting server "kubelet" with restart command
I1026 21:19:45.816963    1509 server.go:171] Running health check for service "kubelet"
I1026 21:19:45.816993    1509 util.go:48] Running readiness check for service "kubelet"
STEP: from "correct" to "fail-validate"
I1026 21:19:46.819153    1509 server.go:182] Initial health check passed for service "kubelet"
I1026 21:19:57.830990    1509 server.go:222] Restarting server "kubelet" with restart command
I1026 21:19:57.840073    1509 server.go:171] Running health check for service "kubelet"
I1026 21:19:57.840103    1509 util.go:48] Running readiness check for service "kubelet"
STEP: back to "correct" from "fail-validate"
I1026 21:19:58.841782    1509 server.go:182] Initial health check passed for service "kubelet"
I1026 21:20:08.852614    1509 server.go:222] Restarting server "kubelet" with restart command
I1026 21:20:08.861461    1509 server.go:171] Running health check for service "kubelet"
I1026 21:20:08.861496    1509 util.go:48] Running readiness check for service "kubelet"
STEP: setting initial state "fail-parse"
I1026 21:20:09.863077    1509 server.go:182] Initial health check passed for service "kubelet"
I1026 21:20:21.876259    1509 server.go:222] Restarting server "kubelet" with restart command
I1026 21:20:21.884592    1509 server.go:171] Running health check for service "kubelet"
I1026 21:20:21.884619    1509 util.go:48] Running readiness check for service "kubelet"
STEP: from "fail-parse" to "fail-validate"
I1026 21:20:22.885945    1509 server.go:182] Initial health check passed for service "kubelet"
I1026 21:20:33.899444    1509 server.go:222] Restarting server "kubelet" with restart command
I1026 21:20:33.908462    1509 server.go:171] Running health check for service "kubelet"
I1026 21:20:33.908492    1509 util.go:48] Running readiness check for service "kubelet"
STEP: back to "fail-parse" from "fail-validate"
I1026 21:20:34.910579    1509 server.go:182] Initial health check passed for service "kubelet"
I1026 21:20:46.924798    1509 server.go:222] Restarting server "kubelet" with restart command
I1026 21:20:46.935622    1509 server.go:171] Running health check for service "kubelet"
I1026 21:20:46.935656    1509 util.go:48] Running readiness check for service "kubelet"
I1026 21:20:47.937939    1509 server.go:182] Initial health check passed for service "kubelet"
STEP: setting initial state "fail-validate"
I1026 21:20:59.952683    1509 server.go:222] Restarting server "kubelet" with restart command
I1026 21:20:59.960610    1509 server.go:171] Running health check for service "kubelet"
I1026 21:20:59.960640    1509 util.go:48] Running readiness check for service "kubelet"
I1026 21:21:00.961886    1509 server.go:182] Initial health check passed for service "kubelet"
[AfterEach] 
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:123
... skipping 158 lines ...
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
, kubelet-20211026T211704
W1026 21:25:16.531030    1509 util.go:469] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
Oct 26 21:25:16.547: INFO: Get running kubelet with systemctl: UNIT                                                               LOAD   ACTIVE SUB     DESCRIPTION
home-kubernetes-containerized_mounter-rootfs-var-lib-kubelet.mount loaded active mounted /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet
kubelet-20211026T211704.service                                    loaded active exited  /tmp/node-e2e-20211026T211704/kubelet --kubeconfig /tmp/node-e2e-20211026T211704/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --feature-gates DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --dynamic-config-dir /tmp/node-e2e-20211026T211704/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20211026T211704/cni/bin --cni-conf-dir /tmp/node-e2e-20211026T211704/cni/net.d --cni-cache-dir /tmp/node-e2e-20211026T211704/cni/cache --hostname-override n1-standard-2-cos-89-16108-534-17-1e7097e6 --container-runtime docker --container-runtime-endpoint unix:///var/run/dockershim.sock --config /tmp/node-e2e-20211026T211704/kubelet-config --experimental-mounter-path=/tmp/node-e2e-20211026T211704/mounter --kernel-memcg-notification=true --cluster-domain=cluster.local --cgroups-per-qos=true --cgroup-root=/

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

2 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
, kubelet-20211026T211704
W1026 21:25:16.579278    1509 util.go:469] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
STEP: Waiting for hugepages resource to become available on the local node
W1026 21:25:26.630328    1509 warnings.go:70] spec.configSource: deprecated in v1.22, support removal is planned in v1.23
I1026 21:25:29.252727    1509 server.go:222] Restarting server "kubelet" with restart command
I1026 21:25:29.260863    1509 server.go:171] Running health check for service "kubelet"
I1026 21:25:29.260894    1509 util.go:48] Running readiness check for service "kubelet"
I1026 21:25:30.262344    1509 server.go:182] Initial health check passed for service "kubelet"
... skipping 878 lines ...
Oct 26 21:33:07.536: INFO: Waiting for pod pod-checkpoint-no-disrupt667dec0c-01ab-43d8-9666-b3d54585e3ad to disappear
Oct 26 21:33:07.539: INFO: Pod pod-checkpoint-no-disrupt667dec0c-01ab-43d8-9666-b3d54585e3ad no longer exists
STEP: Waiting for checkpoint to be removed
STEP: Search checkpoints containing "pod-checkpoint-no-disrupt667dec0c-01ab-43d8-9666-b3d54585e3ad"
Oct 26 21:33:07.547: INFO: Checkpoint of "pod-checkpoint-no-disrupt667dec0c-01ab-43d8-9666-b3d54585e3ad" still exists: [/var/lib/dockershim/sandbox/e002550ddd1cb011a2743d02e00bd9f76db3ab9e1ec41f21fe9654146b6ac1ea]
STEP: Search checkpoints containing "pod-checkpoint-no-disrupt667dec0c-01ab-43d8-9666-b3d54585e3ad"
Oct 26 21:33:17.557: INFO: grep from dockershim checkpoint directory returns error: exit status 1
[AfterEach] [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 26 21:33:17.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dockerhism-checkpoint-test-245" for this suite.

• [SLOW TEST:16.071 seconds]
... skipping 259 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 26 21:44:14.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resource-usage-3082" for this suite.
[AfterEach] [sig-node] Resource-usage [Serial] [Slow]
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:60
W1026 21:44:15.007727    1509 metrics_grabber.go:110] Can't find any pods in namespace kube-system to grab metrics from
Oct 26 21:44:15.045: INFO: runtime operation error metrics:
node "n1-standard-2-cos-89-16108-534-17-1e7097e6" runtime operation error rate:
operation "remove_container": total - 13; error rate - 0.000000; timeout rate - 0.000000
operation "create_container": total - 22; error rate - 0.000000; timeout rate - 0.000000
operation "inspect_container": total - 238; error rate - 0.033613; timeout rate - 0.000000
operation "start_container": total - 22; error rate - 0.000000; timeout rate - 0.000000
operation "list_images": total - 91; error rate - 0.000000; timeout rate - 0.000000
operation "stop_container": total - 27; error rate - 0.000000; timeout rate - 0.000000
operation "version": total - 194; error rate - 0.000000; timeout rate - 0.000000
operation "list_containers": total - 2507; error rate - 0.000000; timeout rate - 0.000000
operation "info": total - 0; error rate - NaN; timeout rate - NaN
operation "inspect_image": total - 92; error rate - 0.000000; timeout rate - 0.000000



• [SLOW TEST:653.393 seconds]
[sig-node] Resource-usage [Serial] [Slow]
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:23
... skipping 90 lines ...
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
, kubelet-20211026T211704
W1026 21:57:00.475862    1509 util.go:469] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": read tcp 127.0.0.1:48670->127.0.0.1:10255: read: connection reset by peer
Oct 26 21:57:00.489: INFO: Get running kubelet with systemctl: UNIT                                                               LOAD   ACTIVE SUB     DESCRIPTION
home-kubernetes-containerized_mounter-rootfs-var-lib-kubelet.mount loaded active mounted /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet
kubelet-20211026T211704.service                                    loaded active exited  /tmp/node-e2e-20211026T211704/kubelet --kubeconfig /tmp/node-e2e-20211026T211704/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --feature-gates DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --dynamic-config-dir /tmp/node-e2e-20211026T211704/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20211026T211704/cni/bin --cni-conf-dir /tmp/node-e2e-20211026T211704/cni/net.d --cni-cache-dir /tmp/node-e2e-20211026T211704/cni/cache --hostname-override n1-standard-2-cos-89-16108-534-17-1e7097e6 --container-runtime docker --container-runtime-endpoint unix:///var/run/dockershim.sock --config /tmp/node-e2e-20211026T211704/kubelet-config --experimental-mounter-path=/tmp/node-e2e-20211026T211704/mounter --kernel-memcg-notification=true --cluster-domain=cluster.local --cgroups-per-qos=true --cgroup-root=/

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

2 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
, kubelet-20211026T211704
W1026 21:57:00.516912    1509 util.go:469] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
STEP: Waiting for hugepages resource to become available on the local node
W1026 21:57:10.580152    1509 warnings.go:70] spec.configSource: deprecated in v1.22, support removal is planned in v1.23
I1026 21:57:12.977803    1509 server.go:222] Restarting server "kubelet" with restart command
I1026 21:57:12.987687    1509 server.go:171] Running health check for service "kubelet"
I1026 21:57:12.987715    1509 util.go:48] Running readiness check for service "kubelet"
I1026 21:57:13.988978    1509 server.go:182] Initial health check passed for service "kubelet"
... skipping 171 lines ...
STEP: Wait for 0 temp events generated
STEP: Wait for 0 total events generated
STEP: Make sure only 0 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should not generate events for too old log
STEP: Inject 3 logs: "temporary error"
STEP: Wait for 0 temp events generated
STEP: Wait for 0 total events generated
STEP: Make sure only 0 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should not change node condition for too old log
STEP: Inject 1 logs: "permanent error 1"
STEP: Wait for 0 temp events generated
STEP: Wait for 0 total events generated
STEP: Make sure only 0 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should generate event for old log within lookback duration
STEP: Inject 3 logs: "temporary error"
STEP: Wait for 3 temp events generated
STEP: Wait for 3 total events generated
STEP: Make sure only 3 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should change node condition for old log within lookback duration
STEP: Inject 1 logs: "permanent error 1"
STEP: Wait for 3 temp events generated
STEP: Wait for 4 total events generated
STEP: Make sure only 4 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should generate event for new log
STEP: Inject 3 logs: "temporary error"
STEP: Wait for 6 temp events generated
STEP: Wait for 7 total events generated
STEP: Make sure only 7 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should not update node condition with the same reason
STEP: Inject 1 logs: "permanent error 1different message"
STEP: Wait for 6 temp events generated
STEP: Wait for 7 total events generated
STEP: Make sure only 7 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should change node condition for new log
STEP: Inject 1 logs: "permanent error 2"
STEP: Wait for 6 temp events generated
STEP: Wait for 8 total events generated
STEP: Make sure only 8 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
[AfterEach] SystemLogMonitor
... skipping 70 lines ...
Oct 26 22:01:21.113: INFO: Skipping waiting for service account
[BeforeEach] Downward API tests for local ephemeral storage
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi.go:38
[It] should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi.go:42
STEP: Creating a pod to test downward api env vars
Oct 26 22:01:21.118: INFO: Waiting up to 5m0s for pod "downward-api-421796bb-5357-48da-9dd4-ea14520cc3ae" in namespace "downward-api-5715" to be "Succeeded or Failed"
Oct 26 22:01:21.120: INFO: Pod "downward-api-421796bb-5357-48da-9dd4-ea14520cc3ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.389055ms
Oct 26 22:01:23.123: INFO: Pod "downward-api-421796bb-5357-48da-9dd4-ea14520cc3ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005294118s
STEP: Saw pod success
Oct 26 22:01:23.123: INFO: Pod "downward-api-421796bb-5357-48da-9dd4-ea14520cc3ae" satisfied condition "Succeeded or Failed"
Oct 26 22:01:23.125: INFO: Trying to get logs from node n1-standard-2-cos-89-16108-534-17-1e7097e6 pod downward-api-421796bb-5357-48da-9dd4-ea14520cc3ae container dapi-container: <nil>
STEP: delete the pod
Oct 26 22:01:23.135: INFO: Waiting for pod downward-api-421796bb-5357-48da-9dd4-ea14520cc3ae to disappear
Oct 26 22:01:23.136: INFO: Pod downward-api-421796bb-5357-48da-9dd4-ea14520cc3ae no longer exists
[AfterEach] [sig-storage] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 67 lines ...
Oct 26 22:01:27.185: INFO: The status of Pod gc-test-pod-many-containers-many-restarts-two is Pending, waiting for it to be Running (with Ready = true)
Oct 26 22:01:27.186: INFO: The status of Pod gc-test-pod-many-containers-many-restarts-one is Pending, waiting for it to be Running (with Ready = true)
Oct 26 22:01:27.187: INFO: The status of Pod gc-test-pod-many-containers-many-restarts-three is Pending, waiting for it to be Running (with Ready = true)
I1026 22:01:28.274883    1509 server.go:222] Restarting server "kubelet" with restart command
I1026 22:01:28.283602    1509 server.go:171] Running health check for service "kubelet"
I1026 22:01:28.283631    1509 util.go:48] Running readiness check for service "kubelet"
W1026 22:01:28.395387    1509 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 22:01:28.395577    1509 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 22:01:28.395928    1509 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 22:01:28.396037    1509 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 22:01:28.396117    1509 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 22:01:28.396192    1509 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
Oct 26 22:01:29.185: INFO: The status of Pod gc-test-pod-many-containers-many-restarts-three is Pending, waiting for it to be Running (with Ready = true)
Oct 26 22:01:29.186: INFO: The status of Pod gc-test-pod-many-containers-many-restarts-two is Pending, waiting for it to be Running (with Ready = true)
Oct 26 22:01:29.186: INFO: The status of Pod gc-test-pod-many-containers-many-restarts-one is Pending, waiting for it to be Running (with Ready = true)
I1026 22:01:29.285174    1509 server.go:182] Initial health check passed for service "kubelet"
Oct 26 22:01:31.189: INFO: The status of Pod gc-test-pod-many-containers-many-restarts-one is Running (Ready = false)
Oct 26 22:01:31.189: INFO: The status of Pod gc-test-pod-many-containers-many-restarts-three is Running (Ready = false)
... skipping 48 lines ...
---------------------------------------------------------
Received interrupt.  Running AfterSuite...
^C again to terminate immediately
I1026 22:02:09.543088    1509 e2e_node_suite_test.go:237] Stopping node services...
I1026 22:02:09.543099    1509 server.go:257] Kill server "services"
I1026 22:02:09.543110    1509 server.go:294] Killing process 2762 (services) with -TERM
E1026 22:02:09.606841    1509 services.go:95] Failed to stop services: error stopping "services": waitid: no child processes
I1026 22:02:09.606875    1509 server.go:257] Kill server "kubelet"
I1026 22:02:09.617184    1509 services.go:156] Fetching log files...
I1026 22:02:09.617267    1509 services.go:165] Get log file "containerd.log" with journalctl command [-u containerd].
I1026 22:02:09.642622    1509 services.go:165] Get log file "containerd-installation.log" with journalctl command [-u containerd-installation].
I1026 22:02:09.646705    1509 services.go:165] Get log file "kern.log" with journalctl command [-k].
I1026 22:02:09.664167    1509 services.go:165] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I1026 22:02:09.953851    1509 services.go:165] Get log file "docker.log" with journalctl command [-u docker].
I1026 22:02:09.964798    1509 e2e_node_suite_test.go:242] Tests Finished

JUnit report was created: /tmp/node-e2e-20211026T211704/results/junit_cos-stable1_01.xml

Ran 13 of 97 Specs in 2700.222 seconds
FAIL! -- 13 Passed | 0 Failed | 1 Pending | 83 Skipped

Ginkgo ran 1 suite in 45m0.424709442s
Test Suite Failed

Failure Finished Test Suite on Host n1-standard-2-cos-89-16108-534-17-1e7097e6
command [ssh -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@35.238.54.135 -- sudo sh -c 'cd /tmp/node-e2e-20211026T211704 && timeout -k 30s 2700.000000s ./ginkgo  -focus="\[Serial\]"  -skip="\[Flaky\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeSpecialFeature\]|\[NodeAlphaFeature:.+\]|\[NodeAlphaFeature\]|\[NodeFeature:Eviction\]"  -untilItFails=false  ./e2e_node.test -- --system-spec-name= --system-spec-file= --extra-envs= --runtime-config= --logtostderr --v 4 --node-name=n1-standard-2-cos-89-16108-534-17-1e7097e6 --report-dir=/tmp/node-e2e-20211026T211704/results --report-prefix=cos-stable1 --image-description="cos-89-16108-534-17" --kubelet-flags=--experimental-mounter-path=/tmp/node-e2e-20211026T211704/mounter --kubelet-flags=--kernel-memcg-notification=true --kubelet-flags="--cluster-domain=cluster.local" --dns-domain="cluster.local" --feature-gates=DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/"'] failed with error: exit status 124
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
<                              FINISH TEST                               <
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

I1026 22:02:11.941382   12685 remote.go:123] Copying test artifacts from "n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5"
I1026 22:02:11.941500   12685 ssh.go:117] Running the command scp, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine -r prow@34.68.40.205:/tmp/node-e2e-20211026T211704/results/*.log /logs/artifacts/a121f945-36a0-11ec-ba9f-a2e2905e9978/n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5]
I1026 22:02:12.256638   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@34.68.40.205 -- sudo ls /tmp/node-e2e-20211026T211704/results/*.json]
I1026 22:02:12.502861   12685 ssh.go:117] Running the command scp, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine -r prow@34.68.40.205:/tmp/node-e2e-20211026T211704/results/*.json /logs/artifacts/a121f945-36a0-11ec-ba9f-a2e2905e9978]
I1026 22:02:12.745704   12685 ssh.go:117] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@34.68.40.205 -- sudo ls /tmp/node-e2e-20211026T211704/results/junit*]
I1026 22:02:12.987093   12685 ssh.go:117] Running the command scp, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@34.68.40.205:/tmp/node-e2e-20211026T211704/results/junit* /logs/artifacts/a121f945-36a0-11ec-ba9f-a2e2905e9978]
I1026 22:02:13.412803   12685 run_remote.go:856] Deleting instance "n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5"

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>                              START TEST                                >
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Start Test Suite on Host n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5
... skipping 61 lines ...
Oct 26 21:17:11.645: INFO: Parsing ds from https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/device-plugins/nvidia-gpu/daemonset.yaml
I1026 21:17:11.698442    3009 image_list.go:171] Pre-pulling images with docker [docker.io/nfvpe/sriov-device-plugin:v3.1 google/cadvisor:latest k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff k8s.gcr.io/e2e-test-images/agnhost:2.33 k8s.gcr.io/e2e-test-images/busybox:1.29-2 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 k8s.gcr.io/e2e-test-images/ipc-utils:1.3 k8s.gcr.io/e2e-test-images/nginx:1.14-2 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.2 k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.2 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1 k8s.gcr.io/e2e-test-images/nonewprivs:1.3 k8s.gcr.io/e2e-test-images/nonroot:1.2 k8s.gcr.io/e2e-test-images/perl:5.26 k8s.gcr.io/e2e-test-images/volume/gluster:1.3 k8s.gcr.io/e2e-test-images/volume/nfs:1.3 k8s.gcr.io/node-problem-detector/node-problem-detector:v0.8.7 k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa k8s.gcr.io/pause:3.6 k8s.gcr.io/stress:v1 quay.io/kubevirt/device-plugin-kvm]
I1026 21:18:43.076825    3009 server.go:102] Starting server "services" with command "/tmp/node-e2e-20211026T211704/e2e_node.test --run-services-mode --bearer-token=w_TwbbHZDjXBgAEJ --test.timeout=24h0m0s --ginkgo.seed=1635283031 --ginkgo.focus=\\[Serial\\] --ginkgo.skip=\\[Flaky\\]|\\[Benchmark\\]|\\[NodeSpecialFeature:.+\\]|\\[NodeSpecialFeature\\]|\\[NodeAlphaFeature:.+\\]|\\[NodeAlphaFeature\\]|\\[NodeFeature:Eviction\\] --ginkgo.slowSpecThreshold=5.00000 --system-spec-name= --system-spec-file= --extra-envs= --runtime-config= --logtostderr --v 4 --node-name=n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5 --report-dir=/tmp/node-e2e-20211026T211704/results --report-prefix=ubuntu --image-description=ubuntu-gke-2004-1-20-v20210401 --kubelet-flags=--kernel-memcg-notification=true --kubelet-flags=--cluster-domain=cluster.local --dns-domain=cluster.local --feature-gates=DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --kubelet-flags=--cgroups-per-qos=true --cgroup-root=/"
I1026 21:18:43.076898    3009 util.go:48] Running readiness check for service "services"
I1026 21:18:43.076991    3009 server.go:130] Output file for server "services": /tmp/node-e2e-20211026T211704/results/services.log
I1026 21:18:43.077397    3009 server.go:160] Waiting for server "services" start command to complete
W1026 21:18:44.077578    3009 util.go:104] Health check on "https://127.0.0.1:6443/healthz" failed, error=Head "https://127.0.0.1:6443/healthz": dial tcp 127.0.0.1:6443: connect: connection refused
W1026 21:18:47.932074    3009 util.go:106] Health check on "https://127.0.0.1:6443/healthz" failed, status=500
I1026 21:18:48.933800    3009 services.go:70] Node services started.
I1026 21:18:48.933828    3009 kubelet.go:100] Starting kubelet
W1026 21:18:48.933917    3009 feature_gate.go:235] Setting deprecated feature gate DynamicKubeletConfig=true. It will be removed in a future release.
I1026 21:18:48.933932    3009 feature_gate.go:245] feature gates: &{map[DynamicKubeletConfig:true LocalStorageCapacityIsolation:true]}
I1026 21:18:48.935581    3009 server.go:102] Starting server "kubelet" with command "/usr/bin/systemd-run -p Delegate=true -p StandardError=file:/tmp/node-e2e-20211026T211704/results/kubelet.log --unit=kubelet-20211026T211704.service --slice=runtime.slice --remain-after-exit /tmp/node-e2e-20211026T211704/kubelet --kubeconfig /tmp/node-e2e-20211026T211704/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --feature-gates DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --dynamic-config-dir /tmp/node-e2e-20211026T211704/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20211026T211704/cni/bin --cni-conf-dir /tmp/node-e2e-20211026T211704/cni/net.d --cni-cache-dir /tmp/node-e2e-20211026T211704/cni/cache --hostname-override n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5 --container-runtime docker --container-runtime-endpoint unix:///var/run/dockershim.sock --config /tmp/node-e2e-20211026T211704/kubelet-config --kernel-memcg-notification=true --cluster-domain=cluster.local --cgroups-per-qos=true --cgroup-root=/"
I1026 21:18:48.935623    3009 util.go:48] Running readiness check for service "kubelet"
I1026 21:18:48.935697    3009 server.go:130] Output file for server "kubelet": /tmp/node-e2e-20211026T211704/results/kubelet.log
I1026 21:18:48.936166    3009 server.go:171] Running health check for service "kubelet"
I1026 21:18:48.936187    3009 util.go:48] Running readiness check for service "kubelet"
W1026 21:18:49.936193    3009 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
W1026 21:18:49.936536    3009 util.go:104] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I1026 21:18:50.937594    3009 server.go:182] Initial health check passed for service "kubelet"
I1026 21:18:50.937655    3009 services.go:80] Kubelet started.
I1026 21:18:50.937669    3009 e2e_node_suite_test.go:217] Wait for the node to be ready
Oct 26 21:19:00.988: INFO: Parsing ds from https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/device-plugins/nvidia-gpu/daemonset.yaml
[sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When pod sandbox checkpoint is missing 
  should complete pod sandbox clean up
... skipping 21 lines ...
Oct 26 21:19:05.118: INFO: Waiting for pod pod-checkpoint-missing7965d949-f7df-46a4-9d98-1f16a8984454 to disappear
Oct 26 21:19:05.120: INFO: Pod pod-checkpoint-missing7965d949-f7df-46a4-9d98-1f16a8984454 still exists
Oct 26 21:19:07.120: INFO: Waiting for pod pod-checkpoint-missing7965d949-f7df-46a4-9d98-1f16a8984454 to disappear
Oct 26 21:19:07.124: INFO: Pod pod-checkpoint-missing7965d949-f7df-46a4-9d98-1f16a8984454 no longer exists
STEP: Waiting for checkpoint to be removed
STEP: Search checkpoints containing "pod-checkpoint-missing7965d949-f7df-46a4-9d98-1f16a8984454"
Oct 26 21:19:07.139: INFO: grep from dockershim checkpoint directory returns error: exit status 1
[AfterEach] [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 26 21:19:07.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dockerhism-checkpoint-test-1757" for this suite.

• [SLOW TEST:6.093 seconds]
... skipping 37 lines ...
Oct 26 21:19:23.220: INFO: Checkpoint of "pod-checkpoint-no-disrupt27b1710b-ee85-4f0d-be83-6d7ce41aba14" still exists: [/var/lib/dockershim/sandbox/9328ec8244e1f00ea26acdd6f738628f7355ea163f4c6d52facd7f9874776e4f]
STEP: Search checkpoints containing "pod-checkpoint-no-disrupt27b1710b-ee85-4f0d-be83-6d7ce41aba14"
Oct 26 21:19:33.221: INFO: Checkpoint of "pod-checkpoint-no-disrupt27b1710b-ee85-4f0d-be83-6d7ce41aba14" still exists: [/var/lib/dockershim/sandbox/9328ec8244e1f00ea26acdd6f738628f7355ea163f4c6d52facd7f9874776e4f]
STEP: Search checkpoints containing "pod-checkpoint-no-disrupt27b1710b-ee85-4f0d-be83-6d7ce41aba14"
Oct 26 21:19:43.220: INFO: Checkpoint of "pod-checkpoint-no-disrupt27b1710b-ee85-4f0d-be83-6d7ce41aba14" still exists: [/var/lib/dockershim/sandbox/9328ec8244e1f00ea26acdd6f738628f7355ea163f4c6d52facd7f9874776e4f]
STEP: Search checkpoints containing "pod-checkpoint-no-disrupt27b1710b-ee85-4f0d-be83-6d7ce41aba14"
Oct 26 21:19:53.221: INFO: grep from dockershim checkpoint directory returns error: exit status 1
[AfterEach] [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 26 21:19:53.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dockerhism-checkpoint-test-9196" for this suite.

• [SLOW TEST:46.086 seconds]
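The dockershim checkpoint test above polls the checkpoint directory every ten seconds until the grep for the pod's sandbox entry exits with status 1 (no match, i.e. the checkpoint is gone). A minimal sketch of that polling loop — with hypothetical names, not the suite's actual Go implementation — might look like:

```python
import time

def wait_for_checkpoint_removal(checkpoint_exists, interval=10.0, timeout=60.0,
                                sleeper=time.sleep):
    """Poll until checkpoint_exists() reports the checkpoint is gone.

    checkpoint_exists mimics `grep <pod-id> /var/lib/dockershim/sandbox/*`:
    True while the checkpoint file still matches, False once grep exits 1.
    Returns True if the checkpoint disappeared within the timeout.
    """
    deadline = time.monotonic() + timeout
    while checkpoint_exists():
        if time.monotonic() > deadline:
            return False  # checkpoint never disappeared; the test would fail here
        sleeper(interval)
    return True
```

In the `[SLOW TEST:46.086 seconds]` run above, the checkpoint survived four polls before the grep finally returned exit status 1.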
... skipping 22 lines ...
I1026 21:19:57.017182    3009 util.go:48] Running readiness check for service "kubelet"
STEP: setting initial state "correct"
I1026 21:19:58.019279    3009 server.go:182] Initial health check passed for service "kubelet"
I1026 21:20:09.032065    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:20:09.047451    3009 server.go:171] Running health check for service "kubelet"
I1026 21:20:09.047482    3009 util.go:48] Running readiness check for service "kubelet"
STEP: from "correct" to "fail-parse"
I1026 21:20:10.049845    3009 server.go:182] Initial health check passed for service "kubelet"
I1026 21:20:21.062670    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:20:21.073766    3009 server.go:171] Running health check for service "kubelet"
I1026 21:20:21.073797    3009 util.go:48] Running readiness check for service "kubelet"
STEP: back to "correct" from "fail-parse"
I1026 21:20:22.075719    3009 server.go:182] Initial health check passed for service "kubelet"
I1026 21:20:33.089026    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:20:33.100371    3009 server.go:171] Running health check for service "kubelet"
I1026 21:20:33.100397    3009 util.go:48] Running readiness check for service "kubelet"
STEP: from "correct" to "fail-validate"
I1026 21:20:34.102145    3009 server.go:182] Initial health check passed for service "kubelet"
I1026 21:20:45.115911    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:20:45.124916    3009 server.go:171] Running health check for service "kubelet"
I1026 21:20:45.124944    3009 util.go:48] Running readiness check for service "kubelet"
STEP: back to "correct" from "fail-validate"
I1026 21:20:46.127143    3009 server.go:182] Initial health check passed for service "kubelet"
I1026 21:20:56.138279    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:20:56.152458    3009 server.go:171] Running health check for service "kubelet"
I1026 21:20:56.152490    3009 util.go:48] Running readiness check for service "kubelet"
STEP: setting initial state "fail-parse"
I1026 21:20:57.153812    3009 server.go:182] Initial health check passed for service "kubelet"
I1026 21:21:07.172474    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:21:07.193181    3009 server.go:171] Running health check for service "kubelet"
I1026 21:21:07.193227    3009 util.go:48] Running readiness check for service "kubelet"
STEP: from "fail-parse" to "fail-validate"
I1026 21:21:08.194678    3009 server.go:182] Initial health check passed for service "kubelet"
I1026 21:21:18.205628    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:21:18.220992    3009 server.go:171] Running health check for service "kubelet"
I1026 21:21:18.221026    3009 util.go:48] Running readiness check for service "kubelet"
STEP: back to "fail-parse" from "fail-validate"
I1026 21:21:19.223034    3009 server.go:182] Initial health check passed for service "kubelet"
I1026 21:21:30.236247    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:21:30.251851    3009 server.go:171] Running health check for service "kubelet"
I1026 21:21:30.251893    3009 util.go:48] Running readiness check for service "kubelet"
I1026 21:21:31.252676    3009 server.go:182] Initial health check passed for service "kubelet"
STEP: setting initial state "fail-validate"
I1026 21:21:42.262644    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:21:42.301313    3009 server.go:171] Running health check for service "kubelet"
I1026 21:21:42.301347    3009 util.go:48] Running readiness check for service "kubelet"
I1026 21:21:43.302402    3009 server.go:182] Initial health check passed for service "kubelet"
[AfterEach] 
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:123
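Each dynamic-kubelet-config state transition above follows the same pattern: issue the restart command, then run a readiness probe until the initial health check passes. A toy sketch of that sequence (the callables are stand-ins, not the e2e framework's API):

```python
import time

def restart_and_wait_ready(restart, healthy, retries=10, interval=1.0,
                           sleeper=time.sleep):
    """Restart a service, then poll a health probe until it passes.

    restart() issues the restart command; healthy() models the probe
    (e.g. an HTTP GET against the kubelet's /healthz endpoint).
    Returns True once the initial health check passes, False after
    exhausting the retries.
    """
    restart()
    for _ in range(retries):
        if healthy():
            return True
        sleeper(interval)
    return False
```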
... skipping 26 lines ...
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename device-plugin-gpus-errors
Oct 26 21:21:55.485: INFO: Skipping waiting for service account
[BeforeEach] DevicePlugin
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/gpu_device_plugin_test.go:68
STEP: Ensuring that Nvidia GPUs exists on the node
Oct 26 21:21:55.495: INFO: check for nvidia GPUs failed. Got Error: exit status 1
[AfterEach] DevicePlugin
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/gpu_device_plugin_test.go:91
[AfterEach] [sig-node] NVIDIA GPU Device Plugin [Feature:GPUDevicePlugin][NodeFeature:GPUDevicePlugin][Serial] [Disruptive]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 26 21:21:55.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "device-plugin-gpus-errors-812" for this suite.
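The GPU device-plugin suite above bails out in `BeforeEach` because its GPU probe exited with status 1. The gating logic reduces to "probe command exits 0 means GPUs present"; a hedged sketch (the probe command here is purely illustrative, not the suite's actual check):

```python
import subprocess

def node_has_nvidia_gpus(probe_cmd=("sh", "-c", "ls /dev/nvidia* >/dev/null 2>&1")):
    """Return True if the GPU probe command exits 0, False otherwise.

    Mirrors the log above, where a non-zero exit status is treated as
    "no NVIDIA GPUs on this node" and the test body is skipped.
    """
    return subprocess.run(probe_cmd).returncode == 0
```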
... skipping 111 lines ...
LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed.
, kubelet-20211026T211704
W1026 21:25:46.684101    3009 util.go:469] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": read tcp 127.0.0.1:56780->127.0.0.1:10255: read: connection reset by peer
Oct 26 21:25:46.702: INFO: Get running kubelet with systemctl:   UNIT                            LOAD   ACTIVE SUB     DESCRIPTION                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  
  kubelet-20211026T211704.service loaded active running /tmp/node-e2e-20211026T211704/kubelet --kubeconfig /tmp/node-e2e-20211026T211704/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --feature-gates DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --dynamic-config-dir /tmp/node-e2e-20211026T211704/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20211026T211704/cni/bin --cni-conf-dir /tmp/node-e2e-20211026T211704/cni/net.d --cni-cache-dir /tmp/node-e2e-20211026T211704/cni/cache --hostname-override n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5 --container-runtime docker --container-runtime-endpoint unix:///var/run/dockershim.sock --config /tmp/node-e2e-20211026T211704/kubelet-config --kernel-memcg-notification=true --cluster-domain=cluster.local --cgroups-per-qos=true --cgroup-root=/

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
, kubelet-20211026T211704
W1026 21:25:46.746115    3009 util.go:469] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
STEP: Waiting for hugepages resource to become available on the local node
W1026 21:25:56.799986    3009 warnings.go:70] spec.configSource: deprecated in v1.22, support removal is planned in v1.23
I1026 21:25:58.593961    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:25:58.603452    3009 server.go:171] Running health check for service "kubelet"
I1026 21:25:58.603483    3009 util.go:48] Running readiness check for service "kubelet"
I1026 21:25:59.605574    3009 server.go:182] Initial health check passed for service "kubelet"
... skipping 93 lines ...
    keeps GPU assignation to pods after the device plugin has been removed.
    _output/local/go/src/k8s.io/kubernetes/test/e2e_node/gpu_device_plugin_test.go:119
------------------------------
SSSSSSSSSSS
------------------------------
[sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] Without SRIOV devices in the system 
  should return the expected error with the feature gate disabled
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/podresources_test.go:681
[BeforeEach] [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename podresources-test
Oct 26 21:27:48.076: INFO: Skipping waiting for service account
[It] should return the expected error with the feature gate disabled
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/podresources_test.go:681
Oct 26 21:27:48.076: INFO: Only supported when KubeletPodResourcesGetAllocatable feature is disabled
[AfterEach] [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 26 21:27:48.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podresources-test-2828" for this suite.

S [SKIPPING] [0.012 seconds]
[sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources]
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:23
  Without SRIOV devices in the system
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/podresources_test.go:620
    should return the expected error with the feature gate disabled [It]
    _output/local/go/src/k8s.io/kubernetes/test/e2e_node/podresources_test.go:681

    Only supported when KubeletPodResourcesGetAllocatable feature is disabled

    _output/local/go/src/k8s.io/kubernetes/test/e2e_node/podresources_test.go:682
------------------------------
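The `[SKIPPING]` above is deliberate: this podresources case only runs when the `KubeletPodResourcesGetAllocatable` feature gate is off, and the suite skips it with an explanatory message otherwise. A toy model of that guard (using a plain exception in place of ginkgo's `Skip()`):

```python
class SkipTest(Exception):
    """Raised to skip a test case, standing in for ginkgo's Skip()."""

def require_feature_gate_disabled(gates, name):
    """Skip the test unless the named feature gate is disabled.

    gates maps feature-gate names to booleans, as in the kubelet's
    --feature-gates flag; a missing entry is treated as disabled.
    """
    if gates.get(name, False):
        raise SkipTest(f"Only supported when {name} feature is disabled")
```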
... skipping 380 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 26 21:40:22.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resource-usage-5948" for this suite.
[AfterEach] [sig-node] Resource-usage [Serial] [Slow]
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/resource_usage_test.go:60
W1026 21:40:22.811579    3009 metrics_grabber.go:110] Can't find any pods in namespace kube-system to grab metrics from
Oct 26 21:40:22.836: INFO: runtime operation error metrics:
node "n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5" runtime operation error rate:
operation "remove_container": total - 23; error rate - 0.000000; timeout rate - 0.000000
operation "start_container": total - 22; error rate - 0.000000; timeout rate - 0.000000
operation "inspect_container": total - 246; error rate - 0.004065; timeout rate - 0.000000
operation "info": total - 0; error rate - NaN; timeout rate - NaN
operation "stop_container": total - 38; error rate - 0.000000; timeout rate - 0.000000
operation "version": total - 195; error rate - 0.000000; timeout rate - 0.000000
operation "inspect_image": total - 95; error rate - 0.000000; timeout rate - 0.000000
operation "list_containers": total - 2521; error rate - 0.000000; timeout rate - 0.000000
operation "list_images": total - 89; error rate - 0.000000; timeout rate - 0.000000
operation "create_container": total - 22; error rate - 0.000000; timeout rate - 0.000000
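The per-operation summary above is simple ratio arithmetic: error rate is errors over total operations, timeout rate likewise, and an operation with zero recorded calls (the "info" row) yields NaN for both. A sketch of that computation:

```python
import math

def operation_rates(total, errors, timeouts):
    """Compute (error_rate, timeout_rate) as in the metrics summary above.

    With zero recorded operations both rates are NaN, matching the
    `operation "info"` row (error rate - NaN; timeout rate - NaN).
    """
    if total == 0:
        return float("nan"), float("nan")
    return errors / total, timeouts / total
```

For example, the `inspect_container` row (1 error in 246 calls) gives an error rate of about 0.004065.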



• [SLOW TEST:651.314 seconds]
[sig-node] Resource-usage [Serial] [Slow]
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:23
... skipping 33 lines ...
STEP: Building a namespace api object, basename topology-manager-test
Oct 26 21:40:26.891: INFO: Skipping waiting for service account
[It] run Topology Manager policy test suite
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/topology_manager_test.go:870
STEP: by configuring Topology Manager policy to single-numa-node
Oct 26 21:40:26.897: INFO: Configuring topology Manager policy to single-numa-node
Oct 26 21:40:26.900: INFO: failed to find any VF device from [{0000:00:00.0 -1 false false} {0000:00:01.0 -1 false false} {0000:00:01.3 -1 false false} {0000:00:03.0 -1 false false} {0000:00:04.0 -1 false false} {0000:00:05.0 -1 false false}]
Oct 26 21:40:26.900: INFO: New kubelet config is {{ } %!s(bool=true) /tmp/node-e2e-20211026T211704/static-pods570862951 {1m0s} {10s} {20s}  map[] 0.0.0.0 %!s(int32=10250) %!s(int32=10255) /usr/libexec/kubernetes/kubelet-plugins/volume/exec/  /var/lib/kubelet/pki/kubelet.crt /var/lib/kubelet/pki/kubelet.key []  %!s(bool=false) %!s(bool=false) {{} {%!s(bool=false) {2m0s}} {%!s(bool=true)}} {AlwaysAllow {{5m0s} {30s}}} %!s(int32=5) %!s(int32=10) %!s(int32=5) %!s(int32=10) %!s(bool=true) %!s(bool=false) %!s(int32=10248) 127.0.0.1 %!s(int32=-999) cluster.local [] {4h0m0s} {10s} {5m0s} %!s(int32=40) {2m0s} %!s(int32=85) %!s(int32=80) {10s} /kubelet.slice  / %!s(bool=true) cgroupfs static map[] {1s} None single-numa-node container map[] {2m0s} promiscuous-bridge %!s(int32=110) 10.100.0.0/24 %!s(int64=-1) /etc/resolv.conf %!s(bool=false) %!s(bool=true) {100ms} %!s(int64=1000000) %!s(int32=50) application/vnd.kubernetes.protobuf %!s(int32=5) %!s(int32=10) %!s(bool=false) map[memory.available:250Mi nodefs.available:10% nodefs.inodesFree:5%] map[] map[] {30s} %!s(int32=0) map[nodefs.available:5% nodefs.inodesFree:5%] %!s(int32=0) %!s(bool=true) %!s(bool=false) %!s(bool=true) %!s(int32=14) %!s(int32=15) map[CPUManager:%!s(bool=true) DynamicKubeletConfig:%!s(bool=true) LocalStorageCapacityIsolation:%!s(bool=true) TopologyManager:%!s(bool=true)] %!s(bool=true) {} 10Mi %!s(int32=5) Watch [] %!s(bool=true) map[] map[cpu:200m]   [pods]   {text %!s(bool=false) {{%!s(bool=false) {{{%!s(int64=0) %!s(resource.Scale=0)} {%!s(*inf.Dec=<nil>)} 0 DecimalSI}}}}} %!s(bool=true) {0s} {0s} [] %!s(bool=true) %!s(bool=true) %!s(bool=false) %!s(*float64=0xc000b7d6a8)}
W1026 21:40:26.920466    3009 warnings.go:70] spec.configSource: deprecated in v1.22, support removal is planned in v1.23
I1026 21:40:30.472684    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:40:30.493381    3009 server.go:171] Running health check for service "kubelet"
I1026 21:40:30.493424    3009 util.go:48] Running readiness check for service "kubelet"
I1026 21:40:31.495109    3009 server.go:182] Initial health check passed for service "kubelet"
... skipping 35 lines ...
I1026 21:42:17.360727    3009 remote_runtime.go:54] "Connecting to runtime service" endpoint="unix:///var/run/dockershim.sock"
I1026 21:42:17.360872    3009 remote_image.go:41] "Connecting to image service" endpoint="unix:///var/run/dockershim.sock"
Oct 26 21:42:18.365: INFO: Skipping rest of the CPU Manager tests since CPU capacity < 3
[AfterEach] With kubeconfig updated to static CPU Manager policy run the Topology Manager tests
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/topology_manager_test.go:925
W1026 21:42:18.390375    3009 warnings.go:70] spec.configSource: deprecated in v1.22, support removal is planned in v1.23
W1026 21:42:22.908909    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 21:42:22.909466    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 21:42:22.910016    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 21:42:22.910016    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
Oct 26 21:42:23.413: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2b2111dc-a858-45f8-9706-e35a8cd8a1f6] Cache-Control:[no-cache, private] Content-Length:[208] Content-Type:[application/json] Date:[Tue, 26 Oct 2021 21:42:23 GMT]] Body:0xc001468340 ContentLength:208 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001038000 TLS:0xc000d980b0}
I1026 21:42:23.639668    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:42:23.651610    3009 server.go:171] Running health check for service "kubelet"
I1026 21:42:23.651644    3009 util.go:48] Running readiness check for service "kubelet"
I1026 21:42:24.653067    3009 server.go:182] Initial health check passed for service "kubelet"
[AfterEach] [sig-node] Topology Manager [Serial] [Feature:TopologyManager][NodeFeature:TopologyManager]
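The `/configz response status not 200, retrying` line above shows the framework treating a transient 503 from the kubelet (mid-restart) as retryable rather than fatal. A hedged sketch of that retry loop, with `fetch` as a stand-in for the HTTP call:

```python
def get_configz(fetch, retries=5, on_retry=lambda status: None):
    """Fetch /configz, retrying while the endpoint returns non-200.

    fetch() returns a (status_code, body) pair; a 503 such as the
    "Service Unavailable" in the log above triggers a retry instead
    of an immediate failure.
    """
    status = body = None
    for _ in range(retries):
        status, body = fetch()
        if status == 200:
            return body
        on_retry(status)
    raise RuntimeError(f"/configz never returned 200 (last status {status})")
```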
... skipping 174 lines ...
STEP: Wait for 0 temp events generated
STEP: Wait for 0 total events generated
STEP: Make sure only 0 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should not generate events for too old log
STEP: Inject 3 logs: "temporary error"
STEP: Wait for 0 temp events generated
STEP: Wait for 0 total events generated
STEP: Make sure only 0 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should not change node condition for too old log
STEP: Inject 1 logs: "permanent error 1"
STEP: Wait for 0 temp events generated
STEP: Wait for 0 total events generated
STEP: Make sure only 0 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should generate event for old log within lookback duration
STEP: Inject 3 logs: "temporary error"
STEP: Wait for 3 temp events generated
STEP: Wait for 3 total events generated
STEP: Make sure only 3 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should change node condition for old log within lookback duration
STEP: Inject 1 logs: "permanent error 1"
STEP: Wait for 3 temp events generated
STEP: Wait for 4 total events generated
STEP: Make sure only 4 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should generate event for new log
STEP: Inject 3 logs: "temporary error"
STEP: Wait for 6 temp events generated
STEP: Wait for 7 total events generated
STEP: Make sure only 7 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should not update node condition with the same reason
STEP: Inject 1 logs: "permanent error 1different message"
STEP: Wait for 6 temp events generated
STEP: Wait for 7 total events generated
STEP: Make sure only 7 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
STEP: should change node condition for new log
STEP: Inject 1 logs: "permanent error 2"
STEP: Wait for 6 temp events generated
STEP: Wait for 8 total events generated
STEP: Make sure only 8 total events generated
STEP: Make sure node condition "TestCondition" is set
STEP: Make sure node condition "TestCondition" is stable
[AfterEach] SystemLogMonitor
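The SystemLogMonitor counts above follow a consistent rule set: injected entries older than the lookback window produce nothing; each temporary error inside the window adds one temp event (and one total event); a permanent error changes the node condition and adds a total event only when its *reason* changes, which is why "permanent error 1different message" (same reason, different message) left the count at 7 while "permanent error 2" pushed it to 8. A toy model of that behaviour:

```python
class LogMonitor:
    """Toy model of the event counting seen in the log above.

    Not node-problem-detector's implementation; just the counting rules
    the test asserts on.
    """
    def __init__(self, lookback=300.0):
        self.lookback = lookback          # seconds; entries older are ignored
        self.temp_events = 0
        self.total_events = 0
        self.condition_reason = None

    def observe(self, age, kind, reason=None):
        if age > self.lookback:
            return  # too old: no events, no condition change
        if kind == "temporary":
            self.temp_events += 1
            self.total_events += 1
        elif reason != self.condition_reason:  # permanent: dedupe by reason
            self.condition_reason = reason
            self.total_events += 1
```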
... skipping 89 lines ...
Oct 26 21:44:40.508: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:44:40.508: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 0
Oct 26 21:44:40.508: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:44:40.508: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:44:40.510: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:44:40.510: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
STEP: making sure pressure from test has surfaced before continuing
STEP: Waiting for NodeCondition: NoPressure to no longer exist on the node
Oct 26 21:45:00.531: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000680960
Oct 26 21:45:00.531: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000680960
Oct 26 21:45:00.531: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:00.531: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
... skipping 11 lines ...
Oct 26 21:45:00.567: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:00.567: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:00.567: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:00.581: INFO: Kubelet Metrics: []
Oct 26 21:45:00.585: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:00.585: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:02.601: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000680960
Oct 26 21:45:02.601: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000680960
Oct 26 21:45:02.601: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:02.601: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:02.601: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:02.601: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:02.601: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:02.601: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:02.615: INFO: Kubelet Metrics: []
Oct 26 21:45:02.620: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:02.621: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:04.636: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000680960
Oct 26 21:45:04.636: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000680960
Oct 26 21:45:04.636: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:04.636: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:04.636: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:04.636: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:04.636: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:04.636: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:04.651: INFO: Kubelet Metrics: []
Oct 26 21:45:04.655: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:04.655: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:06.677: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000828416
Oct 26 21:45:06.677: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000828416
Oct 26 21:45:06.677: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:06.677: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:06.677: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:06.677: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:06.677: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:06.677: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:06.701: INFO: Kubelet Metrics: []
Oct 26 21:45:06.704: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:06.704: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:08.719: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000828416
Oct 26 21:45:08.719: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000828416
Oct 26 21:45:08.719: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:08.719: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:08.719: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:08.719: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:08.719: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:08.719: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:08.733: INFO: Kubelet Metrics: []
Oct 26 21:45:08.736: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:08.736: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:10.754: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000828416
Oct 26 21:45:10.754: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000828416
Oct 26 21:45:10.754: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:10.754: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:10.754: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:10.754: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:10.754: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:10.754: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:10.765: INFO: Kubelet Metrics: []
Oct 26 21:45:10.770: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:10.770: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:12.790: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000828416
Oct 26 21:45:12.790: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000828416
Oct 26 21:45:12.790: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:12.790: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:12.790: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:12.790: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:12.790: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:12.790: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:12.804: INFO: Kubelet Metrics: []
Oct 26 21:45:12.808: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:12.808: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:14.825: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000828416
Oct 26 21:45:14.825: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000828416
Oct 26 21:45:14.825: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:14.825: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:14.825: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:14.825: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:14.825: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:14.825: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:14.848: INFO: Kubelet Metrics: []
Oct 26 21:45:14.853: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:14.853: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:16.872: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000812032
Oct 26 21:45:16.872: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000812032
Oct 26 21:45:16.872: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:16.872: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:16.872: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:16.872: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:16.872: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:16.872: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:16.891: INFO: Kubelet Metrics: []
Oct 26 21:45:16.894: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:16.894: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:18.915: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000812032
Oct 26 21:45:18.915: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000812032
Oct 26 21:45:18.915: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:18.915: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:18.915: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:18.915: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:18.915: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:18.915: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:18.929: INFO: Kubelet Metrics: []
Oct 26 21:45:18.933: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:18.933: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:20.953: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000812032
Oct 26 21:45:20.953: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000812032
Oct 26 21:45:20.953: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:20.953: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:20.953: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:20.953: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:20.953: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:20.953: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:20.975: INFO: Kubelet Metrics: []
Oct 26 21:45:20.979: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:20.979: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:22.992: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000812032
Oct 26 21:45:22.992: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000812032
Oct 26 21:45:22.993: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:22.993: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:22.993: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:22.993: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:22.993: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:22.993: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:23.021: INFO: Kubelet Metrics: []
Oct 26 21:45:23.027: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:23.027: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:25.050: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000812032
Oct 26 21:45:25.050: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000812032
Oct 26 21:45:25.051: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:25.051: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:25.051: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:25.051: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:25.051: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:25.051: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:25.065: INFO: Kubelet Metrics: []
Oct 26 21:45:25.071: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:25.071: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:27.088: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000803840
Oct 26 21:45:27.089: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000803840
Oct 26 21:45:27.089: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:27.089: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:27.089: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:27.089: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:27.089: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:27.089: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:27.103: INFO: Kubelet Metrics: []
Oct 26 21:45:27.107: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:27.107: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:29.123: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000803840
Oct 26 21:45:29.123: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000803840
Oct 26 21:45:29.123: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:29.123: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:29.123: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:29.123: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:29.123: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:29.123: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:29.136: INFO: Kubelet Metrics: []
Oct 26 21:45:29.139: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:29.139: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:31.160: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000803840
Oct 26 21:45:31.160: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000803840
Oct 26 21:45:31.160: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:31.160: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:31.160: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:31.160: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:31.160: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:31.160: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:31.183: INFO: Kubelet Metrics: []
Oct 26 21:45:31.187: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:31.187: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:33.206: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000803840
Oct 26 21:45:33.206: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000803840
Oct 26 21:45:33.206: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:33.206: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:33.206: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:33.206: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:33.206: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:33.206: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:33.230: INFO: Kubelet Metrics: []
Oct 26 21:45:33.233: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:33.233: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:35.250: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000803840
Oct 26 21:45:35.250: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000803840
Oct 26 21:45:35.250: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:35.250: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:35.250: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:35.250: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:35.250: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:35.250: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:35.264: INFO: Kubelet Metrics: []
Oct 26 21:45:35.270: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:35.270: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:37.289: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000795648
Oct 26 21:45:37.290: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000795648
Oct 26 21:45:37.290: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:37.290: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:37.290: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:37.290: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:37.290: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:37.290: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:37.304: INFO: Kubelet Metrics: []
Oct 26 21:45:37.307: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:37.307: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:39.325: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000795648
Oct 26 21:45:39.325: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000795648
Oct 26 21:45:39.325: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:39.325: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:39.325: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:39.325: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:39.325: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:39.325: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:39.348: INFO: Kubelet Metrics: []
Oct 26 21:45:39.351: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:39.351: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:41.364: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000795648
Oct 26 21:45:41.364: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000795648
Oct 26 21:45:41.364: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:41.364: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:41.364: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:41.364: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:41.364: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:41.364: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:41.379: INFO: Kubelet Metrics: []
Oct 26 21:45:41.383: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:41.383: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:43.403: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000795648
Oct 26 21:45:43.403: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000795648
Oct 26 21:45:43.403: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:43.403: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:43.403: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:43.403: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:43.403: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:43.403: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:43.416: INFO: Kubelet Metrics: []
Oct 26 21:45:43.422: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:43.422: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:45.439: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000791552
Oct 26 21:45:45.439: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000791552
Oct 26 21:45:45.439: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:45.439: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:45.439: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:45.439: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:45.439: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:45.439: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:45.450: INFO: Kubelet Metrics: []
Oct 26 21:45:45.454: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:45.454: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:47.471: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000791552
Oct 26 21:45:47.471: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000791552
Oct 26 21:45:47.471: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:47.471: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:47.471: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:47.471: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:47.471: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:47.471: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:47.498: INFO: Kubelet Metrics: []
Oct 26 21:45:47.501: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:47.501: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:49.533: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000791552
Oct 26 21:45:49.533: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000791552
Oct 26 21:45:49.533: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:49.533: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:49.533: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:49.533: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:49.533: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:49.533: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:49.546: INFO: Kubelet Metrics: []
Oct 26 21:45:49.549: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:49.549: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:51.567: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000791552
Oct 26 21:45:51.567: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000791552
Oct 26 21:45:51.567: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:51.567: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:51.567: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:51.567: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:51.567: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:51.567: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:51.581: INFO: Kubelet Metrics: []
Oct 26 21:45:51.586: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:51.586: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:53.601: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000791552
Oct 26 21:45:53.601: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000791552
Oct 26 21:45:53.601: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:53.601: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:53.601: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:53.601: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:53.601: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:53.601: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:53.614: INFO: Kubelet Metrics: []
Oct 26 21:45:53.618: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:53.618: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:55.639: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000771072
Oct 26 21:45:55.639: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000771072
Oct 26 21:45:55.639: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:55.639: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:55.639: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:55.639: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:55.639: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:55.639: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:55.668: INFO: Kubelet Metrics: []
Oct 26 21:45:55.674: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:55.674: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:57.687: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000771072
Oct 26 21:45:57.687: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000771072
Oct 26 21:45:57.687: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:57.687: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:57.687: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:57.687: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:57.687: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:57.687: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:57.700: INFO: Kubelet Metrics: []
Oct 26 21:45:57.703: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:57.703: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
Oct 26 21:45:59.719: INFO: imageFsInfo.CapacityBytes: 20629221376, imageFsInfo.AvailableBytes: 14000771072
Oct 26 21:45:59.719: INFO: rootFsInfo.CapacityBytes: 20629221376, rootFsInfo.AvailableBytes: 14000771072
Oct 26 21:45:59.719: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-false-pod
Oct 26 21:45:59.719: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:59.719: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:59.719: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:45:59.719: INFO: --- summary Container: emptydir-concealed-disk-over-sizelimit-quotas-false-container UsedBytes: 8192
Oct 26 21:45:59.719: INFO: --- summary Volume: test-volume UsedBytes: 4096
Oct 26 21:45:59.733: INFO: Kubelet Metrics: []
Oct 26 21:45:59.737: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod; phase= Running
Oct 26 21:45:59.737: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-false-pod; phase= Running
STEP: checking eviction ordering and ensuring important pods don't fail
STEP: checking for correctly formatted eviction events
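The long run of repeated STEP/INFO lines above is a polling loop: every ~2s the test samples imageFsInfo/rootFsInfo capacity and availability plus per-pod volume UsedBytes, then re-checks eviction ordering. A rough shell stand-in for that sampling (the directory and sample count are illustrative, not the test's real emptyDir volume):

```shell
#!/bin/sh
# Sample filesystem capacity/availability and directory usage a few
# times, the way the eviction test samples rootFsInfo and per-volume
# UsedBytes. TARGET_DIR and SAMPLES are illustrative defaults.
TARGET_DIR="${1:-/tmp}"
SAMPLES="${2:-3}"
i=1
while [ "$i" -le "$SAMPLES" ]; do
  # Column 2 = 1K-blocks (capacity), column 4 = available, per POSIX df -P.
  df -P -k "$TARGET_DIR" | awk 'NR==2 {printf "capacityKB=%s availableKB=%s\n", $2, $4}'
  du -s -k "$TARGET_DIR" 2>/dev/null | awk '{printf "usedKB=%s\n", $1}'
  i=$((i + 1))
done
```

The real test additionally asserts ordering: the over-sizelimit pod must be evicted before (or instead of) the under-sizelimit pod.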
[AfterEach] 
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/eviction_test.go:579
STEP: deleting pods
STEP: deleting pod: emptydir-concealed-disk-over-sizelimit-quotas-false-pod
Oct 26 21:46:00.560: INFO: Waiting for pod emptydir-concealed-disk-over-sizelimit-quotas-false-pod to disappear
... skipping 115 lines ...
[It] should set pids.max for Pod
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/pids_test.go:89
STEP: by creating a G pod
I1026 21:47:27.293468    3009 util.go:247] new configuration has taken effect
STEP: checking if the expected pids settings were applied
Oct 26 21:47:27.302: INFO: Pod to run command: expected=1024; actual=$(cat /tmp/pids//kubepods/podc9fe7d52-37d4-4279-8aa8-0335873e10a7/pids.max); if [ "$expected" -ne "$actual" ]; then exit 1; fi; 
Oct 26 21:47:27.306: INFO: Waiting up to 5m0s for pod "pod18bd7cbe-72d3-4788-a739-c90bc6008c1b" in namespace "pids-limit-test-7896" to be "Succeeded or Failed"
Oct 26 21:47:27.309: INFO: Pod "pod18bd7cbe-72d3-4788-a739-c90bc6008c1b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.054461ms
Oct 26 21:47:29.313: INFO: Pod "pod18bd7cbe-72d3-4788-a739-c90bc6008c1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006952893s
Oct 26 21:47:31.318: INFO: Pod "pod18bd7cbe-72d3-4788-a739-c90bc6008c1b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012556458s
Oct 26 21:47:33.325: INFO: Pod "pod18bd7cbe-72d3-4788-a739-c90bc6008c1b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019780426s
Oct 26 21:47:35.330: INFO: Pod "pod18bd7cbe-72d3-4788-a739-c90bc6008c1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.024344927s
STEP: Saw pod success
Oct 26 21:47:35.330: INFO: Pod "pod18bd7cbe-72d3-4788-a739-c90bc6008c1b" satisfied condition "Succeeded or Failed"
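The pids-limit check above runs a command in the pod that compares an expected value against the cgroup's `pids.max` file. A standalone sketch of the same comparison, with the cgroup file simulated under /tmp (the real test reads the pod's cgroup path shown in the log):

```shell
#!/bin/sh
# Compare an expected pids limit against the value in a pids.max file.
# The file location here is a simulation; the real check reads
# /tmp/pids/.../kubepods/pod<uid>/pids.max inside the pod's cgroup.
expected=1024
pids_file="${PIDS_FILE:-/tmp/pids.max}"
echo 1024 > "$pids_file"            # simulate the cgroup file for this sketch
actual=$(cat "$pids_file")
if [ "$expected" -ne "$actual" ]; then
  echo "pids.max mismatch: expected=$expected actual=$actual" >&2
  exit 1
fi
echo "pids.max OK: $actual"
```

This mirrors the logged command's shape: read, numeric-compare with `-ne`, exit 1 on mismatch so the pod phase becomes Failed instead of Succeeded.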
[AfterEach] With config updated with pids limits
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/util.go:175
W1026 21:47:35.350435    3009 warnings.go:70] spec.configSource: deprecated in v1.22, support removal is planned in v1.23
I1026 21:47:44.068410    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:47:44.100463    3009 server.go:171] Running health check for service "kubelet"
I1026 21:47:44.100487    3009 util.go:48] Running readiness check for service "kubelet"
... skipping 24 lines ...
Oct 26 21:47:45.393: INFO: Skipping waiting for service account
[BeforeEach] Downward API tests for local ephemeral storage
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi.go:38
[It] should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi.go:42
STEP: Creating a pod to test downward api env vars
Oct 26 21:47:45.397: INFO: Waiting up to 5m0s for pod "downward-api-d4d8a11d-f819-42f8-b66d-7087ee650f3d" in namespace "downward-api-8077" to be "Succeeded or Failed"
Oct 26 21:47:45.400: INFO: Pod "downward-api-d4d8a11d-f819-42f8-b66d-7087ee650f3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206113ms
Oct 26 21:47:47.403: INFO: Pod "downward-api-d4d8a11d-f819-42f8-b66d-7087ee650f3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005860821s
STEP: Saw pod success
Oct 26 21:47:47.403: INFO: Pod "downward-api-d4d8a11d-f819-42f8-b66d-7087ee650f3d" satisfied condition "Succeeded or Failed"
Oct 26 21:47:47.406: INFO: Trying to get logs from node n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5 pod downward-api-d4d8a11d-f819-42f8-b66d-7087ee650f3d container dapi-container: <nil>
STEP: delete the pod
Oct 26 21:47:47.417: INFO: Waiting for pod downward-api-d4d8a11d-f819-42f8-b66d-7087ee650f3d to disappear
Oct 26 21:47:47.419: INFO: Pod downward-api-d4d8a11d-f819-42f8-b66d-7087ee650f3d no longer exists
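The downward-API test above injects the container's ephemeral-storage limit and request as environment variables and has the dapi-container print them. A minimal sketch of what runs inside such a container; the variable names and byte values here are illustrative defaults, not the test's exact ones:

```shell
#!/bin/sh
# Echo downward-API-style resource env vars, as the dapi-container
# would. Names/values are hypothetical stand-ins for what the kubelet
# injects from limits.ephemeral-storage / requests.ephemeral-storage.
EPHEMERAL_STORAGE_LIMIT="${EPHEMERAL_STORAGE_LIMIT:-21474836480}"
EPHEMERAL_STORAGE_REQUEST="${EPHEMERAL_STORAGE_REQUEST:-10737418240}"
printf 'ephemeral-storage limit=%s request=%s\n' \
  "$EPHEMERAL_STORAGE_LIMIT" "$EPHEMERAL_STORAGE_REQUEST" \
  > /tmp/dapi_env.txt
cat /tmp/dapi_env.txt
```

The test then fetches the container's logs (the "Trying to get logs" line above) and verifies the printed values match the pod spec.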
[AfterEach] [sig-storage] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 54 lines ...
LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed.
, kubelet-20211026T211704
W1026 21:47:47.584686    3009 util.go:469] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": read tcp 127.0.0.1:57242->127.0.0.1:10255: read: connection reset by peer
Oct 26 21:47:47.613: INFO: Get running kubelet with systemctl:   UNIT                            LOAD   ACTIVE SUB    DESCRIPTION                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  
  kubelet-20211026T211704.service loaded active exited /tmp/node-e2e-20211026T211704/kubelet --kubeconfig /tmp/node-e2e-20211026T211704/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --feature-gates DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --dynamic-config-dir /tmp/node-e2e-20211026T211704/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20211026T211704/cni/bin --cni-conf-dir /tmp/node-e2e-20211026T211704/cni/net.d --cni-cache-dir /tmp/node-e2e-20211026T211704/cni/cache --hostname-override n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5 --container-runtime docker --container-runtime-endpoint unix:///var/run/dockershim.sock --config /tmp/node-e2e-20211026T211704/kubelet-config --kernel-memcg-notification=true --cluster-domain=cluster.local --cgroups-per-qos=true --cgroup-root=/

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
, kubelet-20211026T211704
W1026 21:47:47.663560    3009 util.go:469] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
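The two warnings above are the harness's readiness probe against the kubelet's read-only port failing while the kubelet restarts: first a connection reset mid-restart, then connection refused once it is down. A shell analogue of that poll-until-healthy loop (URL and attempt count are illustrative; the log polls 127.0.0.1:10255, and this sketch deliberately defaults to an unreachable port and exits 0 either way so it is safe to run):

```shell
#!/bin/sh
# Poll a /healthz endpoint until it answers or attempts run out,
# mirroring the harness's kubelet readiness check. Defaults are
# hypothetical: port 1 is expected to refuse connections.
url="${HEALTHZ_URL:-http://127.0.0.1:1/healthz}"
attempts="${HEALTHZ_ATTEMPTS:-1}"
i=1
while [ "$i" -le "$attempts" ]; do
  if curl -sf --max-time 2 "$url" >/dev/null 2>&1; then
    echo "healthy after $i attempt(s)" > /tmp/healthz_result.txt
    cat /tmp/healthz_result.txt
    exit 0
  fi
  i=$((i + 1))
done
# A real probe would exit non-zero here; this sketch just records it.
echo "health check failed after $attempts attempt(s)" > /tmp/healthz_result.txt
cat /tmp/healthz_result.txt
```

In the log the check keeps retrying until "Initial health check passed for service \"kubelet\"" appears once the restarted kubelet is serving again.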
STEP: Waiting for hugepages resource to become available on the local node
W1026 21:47:57.722237    3009 warnings.go:70] spec.configSource: deprecated in v1.22, support removal is planned in v1.23
I1026 21:47:59.117182    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:47:59.134751    3009 server.go:171] Running health check for service "kubelet"
I1026 21:47:59.134787    3009 util.go:48] Running readiness check for service "kubelet"
W1026 21:47:59.246503    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
I1026 21:48:00.136026    3009 server.go:182] Initial health check passed for service "kubelet"
I1026 21:48:02.749836    3009 util.go:247] new configuration has taken effect
[It] should succeed to start the pod
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/memory_manager_test.go:758
Oct 26 21:48:02.768: INFO: The status of Pod memory-manager-nonehc2jx is Pending, waiting for it to be Running (with Ready = true)
Oct 26 21:48:04.772: INFO: The status of Pod memory-manager-nonehc2jx is Running (Ready = true)
... skipping 64 lines ...
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename dynamic-kubelet-configuration-test
Oct 26 21:49:43.862: INFO: Skipping waiting for service account
[BeforeEach] 
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:82
W1026 21:49:48.610472    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
I1026 21:49:49.256192    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:49:49.268736    3009 server.go:171] Running health check for service "kubelet"
I1026 21:49:49.268770    3009 util.go:48] Running readiness check for service "kubelet"
I1026 21:49:50.270726    3009 server.go:182] Initial health check passed for service "kubelet"
[AfterEach] 
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:123
... skipping 2 lines ...
STEP: Collecting events from namespace "dynamic-kubelet-configuration-test-4673".
STEP: Found 0 events.
Oct 26 21:51:49.916: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Oct 26 21:51:49.916: INFO: 
Oct 26 21:51:49.918: INFO: 
Logging node info for node n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5
Oct 26 21:51:49.919: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5    273ccda5-865f-4ac4-bc03-6ac94fb12171 1506 0 2021-10-26 21:18:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{Go-http-client Update v1 2021-10-26 21:18:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {Go-http-client Update v1 2021-10-26 21:42:36 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{},"f:hugepages-2Mi":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{20629221376 0} {<nil>} 20145724Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7817089024 0} {<nil>} 7633876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{18566299208 0} {<nil>} 18566299208 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: 
{{0 0} {<nil>} 0 DecimalSI},memory: {{7554945024 0} {<nil>} 7377876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-26 21:49:49 +0000 UTC,LastTransitionTime:2021-10-26 21:18:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-26 21:49:49 +0000 UTC,LastTransitionTime:2021-10-26 21:18:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-26 21:49:49 +0000 UTC,LastTransitionTime:2021-10-26 21:18:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-26 21:49:49 +0000 UTC,LastTransitionTime:2021-10-26 21:20:56 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.6,},NodeAddress{Type:Hostname,Address:n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4ad02c548d37340b4ddca55175c0d8bc,SystemUUID:4ad02c54-8d37-340b-4ddc-a55175c0d8bc,BootID:68e32436-f588-4599-84e3-955d79b00fcc,KernelVersion:5.4.0-1039-gke,OSImage:Ubuntu 20.04.2 LTS,ContainerRuntimeVersion:docker://19.3.8,KubeletVersion:v1.23.0-alpha.3.549+bb7a6b430b242d,KubeProxyVersion:v1.23.0-alpha.3.549+bb7a6b430b242d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1631162940,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:853285759,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:033a12fe65438751690b519cebd4135a3485771086bcf437212b7b886bb7956c k8s.gcr.io/e2e-test-images/volume/gluster:1.3],SizeBytes:340331177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e k8s.gcr.io/e2e-test-images/volume/nfs:1.3],SizeBytes:263886631,},ContainerImage{Names:[quay.io/kubevirt/device-plugin-kvm@sha256:b44bc0fd6ff8987091bbc7ec630e5ee6683be40d151b4e6635e24afb5807b21a quay.io/kubevirt/device-plugin-kvm:latest],SizeBytes:249864259,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43 k8s.gcr.io/e2e-test-images/agnhost:2.33],SizeBytes:124737480,},ContainerImage{Names:[debian@sha256:4d6ab716de467aad58e91b1b720f0badd7478847ec7a18f66027d0f8a329a43c 
debian:latest],SizeBytes:123864999,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector/node-problem-detector@sha256:0ce71ef6d759425d22b10e65b439749fe5d13377a188e2fc060b731cdb4e6901 k8s.gcr.io/node-problem-detector/node-problem-detector:v0.8.7],SizeBytes:113172715,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:8285539c79625b192a5e33fc3d21edc1a7776fb9afe15fae3b5037a7a8020839 k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.2],SizeBytes:96399029,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:90b5cfc5451428aad4dd6af9960640f2506804d35aa05e83c11bf0a46ac318c8 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.2],SizeBytes:96397229,},ContainerImage{Names:[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest],SizeBytes:69583040,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:42321438,},ContainerImage{Names:[nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb nfvpe/sriov-device-plugin:v3.1],SizeBytes:25318421,},ContainerImage{Names:[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:18981551,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:647d092bada3b46c449d875adf31d71c1dd29c244e9cca6a04fddf9d6bcac136 
k8s.gcr.io/e2e-test-images/ipc-utils:1.3],SizeBytes:10039660,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5494760,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1113554,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:682696,},ContainerImage{Names:[gke-nvidia-installer:fixed],SizeBytes:75,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Oct 26 21:51:49.920: INFO: 
Logging kubelet events for node n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5
Oct 26 21:51:49.921: INFO: 
Logging pods the kubelet thinks is on node n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5
W1026 21:51:49.940757    3009 metrics_grabber.go:110] Can't find any pods in namespace kube-system to grab metrics from
Oct 26 21:51:49.964: INFO: 
... skipping 163 lines ...
Oct 26 21:54:04.022: INFO: Skipping waiting for service account
[BeforeEach] Downward API tests for local ephemeral storage
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi.go:38
[It] should provide default limits.ephemeral-storage from node allocatable
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi.go:70
STEP: Creating a pod to test downward api env vars
Oct 26 21:54:04.028: INFO: Waiting up to 5m0s for pod "downward-api-bf664ee4-7d31-45c1-a634-fb328654c779" in namespace "downward-api-1044" to be "Succeeded or Failed"
Oct 26 21:54:04.030: INFO: Pod "downward-api-bf664ee4-7d31-45c1-a634-fb328654c779": Phase="Pending", Reason="", readiness=false. Elapsed: 1.587105ms
Oct 26 21:54:06.033: INFO: Pod "downward-api-bf664ee4-7d31-45c1-a634-fb328654c779": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.004711237s
STEP: Saw pod success
Oct 26 21:54:06.033: INFO: Pod "downward-api-bf664ee4-7d31-45c1-a634-fb328654c779" satisfied condition "Succeeded or Failed"
Oct 26 21:54:06.034: INFO: Trying to get logs from node n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5 pod downward-api-bf664ee4-7d31-45c1-a634-fb328654c779 container dapi-container: <nil>
STEP: delete the pod
Oct 26 21:54:06.057: INFO: Waiting for pod downward-api-bf664ee4-7d31-45c1-a634-fb328654c779 to disappear
Oct 26 21:54:06.058: INFO: Pod downward-api-bf664ee4-7d31-45c1-a634-fb328654c779 no longer exists
[AfterEach] [sig-storage] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 86 lines ...
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/memory_manager_test.go:400
Oct 26 21:54:22.144: INFO: Waiting for pod memory-manager-nonehc2jx to disappear
Oct 26 21:54:22.146: INFO: Pod memory-manager-nonehc2jx no longer exists
Oct 26 21:54:22.177: INFO: Hugepages total is set to 0
W1026 21:54:22.200438    3009 warnings.go:70] spec.configSource: deprecated in v1.22, support removal is planned in v1.23
I1026 21:54:22.207509    3009 util.go:247] new configuration has taken effect
W1026 21:54:31.748034    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
W1026 21:54:32.212154    3009 util.go:469] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
[AfterEach] [sig-node] Memory Manager [Serial] [Feature:MemoryManager]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Oct 26 21:54:32.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "memory-manager-test-5478" for this suite.

S [SKIPPING] in Spec Setup (BeforeEach) [10.081 seconds]
... skipping 41 lines ...
    Skipping ContainerLogRotation test since the container runtime is not remote

    _output/local/go/src/k8s.io/kubernetes/test/e2e_node/container_log_rotation_test.go:48
------------------------------
SSSSSSSSSSS
------------------------------
[sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive]  delete and recreate ConfigMap: error while ConfigMap is absent: 
  status and events should match expectations
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:784
[BeforeEach] [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename dynamic-kubelet-configuration-test
Oct 26 21:54:32.230: INFO: Skipping waiting for service account
[BeforeEach] 
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:82
Oct 26 21:54:32.245: INFO: /configz response status not 200, retrying. Response was: &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[69adaf09-d481-4eda-8f60-b7c1da3c3164] Cache-Control:[no-cache, private] Content-Length:[208] Content-Type:[application/json] Date:[Tue, 26 Oct 2021 21:54:32 GMT]] Body:0xc00119fdc0 ContentLength:208 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0000e5b00 TLS:0xc000d998c0}
I1026 21:54:32.578890    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:54:32.589546    3009 server.go:171] Running health check for service "kubelet"
I1026 21:54:32.589578    3009 util.go:48] Running readiness check for service "kubelet"
W1026 21:54:32.749261    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
I1026 21:54:33.590699    3009 server.go:182] Initial health check passed for service "kubelet"
I1026 21:54:44.603875    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:54:44.612947    3009 server.go:171] Running health check for service "kubelet"
I1026 21:54:44.612972    3009 util.go:48] Running readiness check for service "kubelet"
I1026 21:54:45.614010    3009 server.go:182] Initial health check passed for service "kubelet"
[AfterEach] 
... skipping 3 lines ...
STEP: Collecting events from namespace "dynamic-kubelet-configuration-test-7183".
STEP: Found 0 events.
Oct 26 21:56:45.351: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Oct 26 21:56:45.351: INFO: 
Oct 26 21:56:45.353: INFO: 
Logging node info for node n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5
Oct 26 21:56:45.355: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5    273ccda5-865f-4ac4-bc03-6ac94fb12171 1692 0 2021-10-26 21:18:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{Go-http-client Update v1 2021-10-26 21:18:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {Go-http-client Update v1 2021-10-26 21:54:32 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{},"f:hugepages-2Mi":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{20629221376 0} {<nil>} 20145724Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7817089024 0} {<nil>} 7633876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{18566299208 0} {<nil>} 18566299208 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: 
{{0 0} {<nil>} 0 DecimalSI},memory: {{7554945024 0} {<nil>} 7377876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-26 21:54:55 +0000 UTC,LastTransitionTime:2021-10-26 21:18:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-26 21:54:55 +0000 UTC,LastTransitionTime:2021-10-26 21:18:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-26 21:54:55 +0000 UTC,LastTransitionTime:2021-10-26 21:18:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-26 21:54:55 +0000 UTC,LastTransitionTime:2021-10-26 21:20:56 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.6,},NodeAddress{Type:Hostname,Address:n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4ad02c548d37340b4ddca55175c0d8bc,SystemUUID:4ad02c54-8d37-340b-4ddc-a55175c0d8bc,BootID:68e32436-f588-4599-84e3-955d79b00fcc,KernelVersion:5.4.0-1039-gke,OSImage:Ubuntu 20.04.2 LTS,ContainerRuntimeVersion:docker://19.3.8,KubeletVersion:v1.23.0-alpha.3.549+bb7a6b430b242d,KubeProxyVersion:v1.23.0-alpha.3.549+bb7a6b430b242d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1631162940,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:853285759,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:033a12fe65438751690b519cebd4135a3485771086bcf437212b7b886bb7956c k8s.gcr.io/e2e-test-images/volume/gluster:1.3],SizeBytes:340331177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e k8s.gcr.io/e2e-test-images/volume/nfs:1.3],SizeBytes:263886631,},ContainerImage{Names:[quay.io/kubevirt/device-plugin-kvm@sha256:b44bc0fd6ff8987091bbc7ec630e5ee6683be40d151b4e6635e24afb5807b21a quay.io/kubevirt/device-plugin-kvm:latest],SizeBytes:249864259,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43 k8s.gcr.io/e2e-test-images/agnhost:2.33],SizeBytes:124737480,},ContainerImage{Names:[debian@sha256:4d6ab716de467aad58e91b1b720f0badd7478847ec7a18f66027d0f8a329a43c 
debian:latest],SizeBytes:123864999,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector/node-problem-detector@sha256:0ce71ef6d759425d22b10e65b439749fe5d13377a188e2fc060b731cdb4e6901 k8s.gcr.io/node-problem-detector/node-problem-detector:v0.8.7],SizeBytes:113172715,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:8285539c79625b192a5e33fc3d21edc1a7776fb9afe15fae3b5037a7a8020839 k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.2],SizeBytes:96399029,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:90b5cfc5451428aad4dd6af9960640f2506804d35aa05e83c11bf0a46ac318c8 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.2],SizeBytes:96397229,},ContainerImage{Names:[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest],SizeBytes:69583040,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:42321438,},ContainerImage{Names:[nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb nfvpe/sriov-device-plugin:v3.1],SizeBytes:25318421,},ContainerImage{Names:[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:18981551,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:647d092bada3b46c449d875adf31d71c1dd29c244e9cca6a04fddf9d6bcac136 
k8s.gcr.io/e2e-test-images/ipc-utils:1.3],SizeBytes:10039660,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5494760,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1113554,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:682696,},ContainerImage{Names:[gke-nvidia-installer:fixed],SizeBytes:75,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Oct 26 21:56:45.356: INFO: 
Logging kubelet events for node n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5
Oct 26 21:56:45.357: INFO: 
Logging pods the kubelet thinks is on node n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5
W1026 21:56:45.376742    3009 metrics_grabber.go:110] Can't find any pods in namespace kube-system to grab metrics from
Oct 26 21:56:45.389: INFO: 
... skipping 3 lines ...

• Failure in Spec Setup (BeforeEach) [133.170 seconds]
[sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive]
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/framework.go:23
  
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:81
    delete and recreate ConfigMap: error while ConfigMap is absent: [BeforeEach]
    _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:783
      status and events should match expectations
      _output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:784

      Timed out after 60.000s.
      Expected
... skipping 51 lines ...
STEP: Collecting events from namespace "dynamic-kubelet-configuration-test-1151".
STEP: Found 0 events.
Oct 26 21:58:45.455: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Oct 26 21:58:45.455: INFO: 
Oct 26 21:58:45.457: INFO: 
Logging node info for node n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5
Oct 26 21:58:45.458: INFO: Node Info: &Node{ObjectMeta:{n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5    273ccda5-865f-4ac4-bc03-6ac94fb12171 1692 0 2021-10-26 21:18:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{Go-http-client Update v1 2021-10-26 21:18:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}} } {Go-http-client Update v1 2021-10-26 21:54:32 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:cpu":{},"f:ephemeral-storage":{},"f:hugepages-2Mi":{},"f:memory":{}},"f:capacity":{"f:ephemeral-storage":{},"f:hugepages-2Mi":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:config":{},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{20629221376 0} {<nil>} 20145724Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7817089024 0} {<nil>} 7633876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{18566299208 0} {<nil>} 18566299208 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: 
{{0 0} {<nil>} 0 DecimalSI},memory: {{7554945024 0} {<nil>} 7377876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-10-26 21:54:55 +0000 UTC,LastTransitionTime:2021-10-26 21:18:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-10-26 21:54:55 +0000 UTC,LastTransitionTime:2021-10-26 21:18:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-10-26 21:54:55 +0000 UTC,LastTransitionTime:2021-10-26 21:18:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-10-26 21:54:55 +0000 UTC,LastTransitionTime:2021-10-26 21:20:56 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.6,},NodeAddress{Type:Hostname,Address:n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4ad02c548d37340b4ddca55175c0d8bc,SystemUUID:4ad02c54-8d37-340b-4ddc-a55175c0d8bc,BootID:68e32436-f588-4599-84e3-955d79b00fcc,KernelVersion:5.4.0-1039-gke,OSImage:Ubuntu 20.04.2 LTS,ContainerRuntimeVersion:docker://19.3.8,KubeletVersion:v1.23.0-alpha.3.549+bb7a6b430b242d,KubeProxyVersion:v1.23.0-alpha.3.549+bb7a6b430b242d,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep@sha256:d5d5822ef70f81db66c1271662e1b9d4556fb267ac7ae09dee5d91aa10736431 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.1],SizeBytes:1631162940,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/perl@sha256:c613344cdd31c5055961b078f831ef9d9199fc9111efe6e81bea3f00d78bd979 k8s.gcr.io/e2e-test-images/perl:5.26],SizeBytes:853285759,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/gluster@sha256:033a12fe65438751690b519cebd4135a3485771086bcf437212b7b886bb7956c k8s.gcr.io/e2e-test-images/volume/gluster:1.3],SizeBytes:340331177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/volume/nfs@sha256:3bda73f2428522b0e342af80a0b9679e8594c2126f2b3cca39ed787589741b9e k8s.gcr.io/e2e-test-images/volume/nfs:1.3],SizeBytes:263886631,},ContainerImage{Names:[quay.io/kubevirt/device-plugin-kvm@sha256:b44bc0fd6ff8987091bbc7ec630e5ee6683be40d151b4e6635e24afb5807b21a quay.io/kubevirt/device-plugin-kvm:latest],SizeBytes:249864259,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43 k8s.gcr.io/e2e-test-images/agnhost:2.33],SizeBytes:124737480,},ContainerImage{Names:[debian@sha256:4d6ab716de467aad58e91b1b720f0badd7478847ec7a18f66027d0f8a329a43c 
debian:latest],SizeBytes:123864999,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector/node-problem-detector@sha256:0ce71ef6d759425d22b10e65b439749fe5d13377a188e2fc060b731cdb4e6901 k8s.gcr.io/node-problem-detector/node-problem-detector:v0.8.7],SizeBytes:113172715,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-is@sha256:8285539c79625b192a5e33fc3d21edc1a7776fb9afe15fae3b5037a7a8020839 k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.2],SizeBytes:96399029,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/node-perf/npb-ep@sha256:90b5cfc5451428aad4dd6af9960640f2506804d35aa05e83c11bf0a46ac318c8 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.2],SizeBytes:96397229,},ContainerImage{Names:[google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 google/cadvisor:latest],SizeBytes:69583040,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:42321438,},ContainerImage{Names:[nfvpe/sriov-device-plugin@sha256:518499ed631ff84b43153b8f7624c1aaacb75a721038857509fe690abdf62ddb nfvpe/sriov-device-plugin:v3.1],SizeBytes:25318421,},ContainerImage{Names:[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:18981551,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/ipc-utils@sha256:647d092bada3b46c449d875adf31d71c1dd29c244e9cca6a04fddf9d6bcac136 
k8s.gcr.io/e2e-test-images/ipc-utils:1.3],SizeBytes:10039660,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[k8s.gcr.io/stress@sha256:f00aa1ddc963a3164aef741aab0fc05074ea96de6cd7e0d10077cf98dd72d594 k8s.gcr.io/stress:v1],SizeBytes:5494760,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:1113554,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:682696,},ContainerImage{Names:[gke-nvidia-installer:fixed],SizeBytes:75,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Oct 26 21:58:45.459: INFO: 
Logging kubelet events for node n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5
Oct 26 21:58:45.460: INFO: 
Logging pods the kubelet thinks are on node n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5
W1026 21:58:45.476535    3009 metrics_grabber.go:110] Can't find any pods in namespace kube-system to grab metrics from
Oct 26 21:58:45.492: INFO: 
... skipping 47 lines ...
LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed.
, kubelet-20211026T211704
W1026 21:58:45.628223    3009 util.go:469] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": read tcp 127.0.0.1:57450->127.0.0.1:10255: read: connection reset by peer
Oct 26 21:58:45.647: INFO: Get running kubelet with systemctl:   UNIT                            LOAD   ACTIVE SUB     DESCRIPTION                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  
  kubelet-20211026T211704.service loaded active running /tmp/node-e2e-20211026T211704/kubelet --kubeconfig /tmp/node-e2e-20211026T211704/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --feature-gates DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --dynamic-config-dir /tmp/node-e2e-20211026T211704/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20211026T211704/cni/bin --cni-conf-dir /tmp/node-e2e-20211026T211704/cni/net.d --cni-cache-dir /tmp/node-e2e-20211026T211704/cni/cache --hostname-override n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5 --container-runtime docker --container-runtime-endpoint unix:///var/run/dockershim.sock --config /tmp/node-e2e-20211026T211704/kubelet-config --kernel-memcg-notification=true --cluster-domain=cluster.local --cgroups-per-qos=true --cgroup-root=/

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
, kubelet-20211026T211704
W1026 21:58:45.696160    3009 util.go:469] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I1026 21:58:45.865578    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 21:58:45.984936    3009 server.go:171] Running health check for service "kubelet"
I1026 21:58:45.984964    3009 util.go:48] Running readiness check for service "kubelet"
I1026 21:58:46.987242    3009 server.go:182] Initial health check passed for service "kubelet"
STEP: Waiting for hugepages resource to become available on the local node
W1026 21:58:55.755194    3009 warnings.go:70] spec.configSource: deprecated in v1.22, support removal is planned in v1.23
... skipping 135 lines ...
LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed.
, kubelet-20211026T211704
W1026 22:01:21.052368    3009 util.go:469] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": read tcp 127.0.0.1:57558->127.0.0.1:10255: read: connection reset by peer
Oct 26 22:01:21.070: INFO: Get running kubelet with systemctl:   UNIT                            LOAD   ACTIVE SUB     DESCRIPTION                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  
  kubelet-20211026T211704.service loaded active running /tmp/node-e2e-20211026T211704/kubelet --kubeconfig /tmp/node-e2e-20211026T211704/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --feature-gates DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --dynamic-config-dir /tmp/node-e2e-20211026T211704/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20211026T211704/cni/bin --cni-conf-dir /tmp/node-e2e-20211026T211704/cni/net.d --cni-cache-dir /tmp/node-e2e-20211026T211704/cni/cache --hostname-override n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5 --container-runtime docker --container-runtime-endpoint unix:///var/run/dockershim.sock --config /tmp/node-e2e-20211026T211704/kubelet-config --kernel-memcg-notification=true --cluster-domain=cluster.local --cgroups-per-qos=true --cgroup-root=/

LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

1 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.
, kubelet-20211026T211704
W1026 22:01:21.072304    3009 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {/var/run/dockershim.sock /var/run/dockershim.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: connection refused". Reconnecting...
... skipping 13 lines ...
W1026 22:01:21.127449    3009 util.go:469] Health check on "http://127.0.0.1:10255/healthz" failed, error=Head "http://127.0.0.1:10255/healthz": dial tcp 127.0.0.1:10255: connect: connection refused
I1026 22:01:21.166007    3009 server.go:222] Restarting server "kubelet" with restart command
I1026 22:01:21.178193    3009 server.go:171] Running health check for service "kubelet"
I1026 22:01:21.178226    3009 util.go:48] Running readiness check for service "kubelet"
I1026 22:01:22.180327    3009 server.go:182] Initial health check passed for service "kubelet"
STEP: Waiting for hugepages resource to become available on the local node
W1026 22:01:31.176082    3009 warnings.go:70] spec.configSource: deprecated in v1.22, support removal is planned in v1.23
... skipping 17 lines ...
---------------------------------------------------------
Received interrupt.  Running AfterSuite...
^C again to terminate immediately
I1026 22:02:11.267854    3009 e2e_node_suite_test.go:237] Stopping node services...
I1026 22:02:11.267868    3009 server.go:257] Kill server "services"
I1026 22:02:11.267882    3009 server.go:294] Killing process 4245 (services) with -TERM
E1026 22:02:11.317045    3009 services.go:95] Failed to stop services: error stopping "services": waitid: no child processes
I1026 22:02:11.317073    3009 server.go:257] Kill server "kubelet"
I1026 22:02:11.326972    3009 services.go:156] Fetching log files...
I1026 22:02:11.327055    3009 services.go:165] Get log file "kern.log" with journalctl command [-k].
I1026 22:02:11.341165    3009 services.go:165] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I1026 22:02:11.351280    3009 services.go:165] Get log file "docker.log" with journalctl command [-u docker].
I1026 22:02:11.359097    3009 services.go:165] Get log file "containerd.log" with journalctl command [-u containerd].
... skipping 2 lines ...

JUnit report was created: /tmp/node-e2e-20211026T211704/results/junit_ubuntu_01.xml


Summarizing 3 Failures:

[Fail] [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive]  [BeforeEach] update ConfigMap in-place: state transitions: status and events should match expectations 
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:1192

[Fail] [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive]  [BeforeEach] delete and recreate ConfigMap: error while ConfigMap is absent: status and events should match expectations 
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:1192

[Fail] [sig-node] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive]  [BeforeEach] update ConfigMap in-place: recover to last-known-good version: status and events should match expectations 
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/dynamic_kubelet_config_test.go:1192

Ran 24 of 215 Specs in 2699.905 seconds
FAIL! -- 21 Passed | 3 Failed | 1 Pending | 190 Skipped

Ginkgo ran 1 suite in 45m0.115607624s
Test Suite Failed

Failure Finished Test Suite on Host n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5
command [ssh -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /root/.ssh/google_compute_engine prow@34.68.40.205 -- sudo sh -c 'cd /tmp/node-e2e-20211026T211704 && timeout -k 30s 2700.000000s ./ginkgo  -focus="\[Serial\]"  -skip="\[Flaky\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeSpecialFeature\]|\[NodeAlphaFeature:.+\]|\[NodeAlphaFeature\]|\[NodeFeature:Eviction\]"  -untilItFails=false  ./e2e_node.test -- --system-spec-name= --system-spec-file= --extra-envs= --runtime-config= --logtostderr --v 4 --node-name=n1-standard-2-ubuntu-gke-2004-1-20-v20210401-c9dc75b5 --report-dir=/tmp/node-e2e-20211026T211704/results --report-prefix=ubuntu --image-description="ubuntu-gke-2004-1-20-v20210401" --kubelet-flags=--kernel-memcg-notification=true --kubelet-flags="--cluster-domain=cluster.local" --dns-domain="cluster.local" --feature-gates=DynamicKubeletConfig=true,LocalStorageCapacityIsolation=true --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/"'] failed with error: exit status 124
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
<                              FINISH TEST                               <
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

Failure: 3 errors encountered.
exit status 1
make: *** [Makefile:271: test-e2e-node] Error 1
F1026 22:02:14.477748      49 node.go:260] failed to run ginkgo tester: exit status 2
Error: exit status 255
+ EXIT_VALUE=1
+ set +o xtrace