PR pacoxu: [WIP] Fix fsquota bug to test
Result: FAILURE
Tests: 1 failed / 46 succeeded
Started: 2022-09-21 10:15
Elapsed: 1h15m
Revision
Builder: 46c655a7-3996-11ed-a458-5a7c66696a7f
Refs: master:6dbec8e2, 112625:1935982b
infra-commit: 1b2a60909
job-version: v1.26.0-alpha.1.19+01b38b2638239a
kubetest-version: v20220916-c3af09ab20
repo: k8s.io/kubernetes
repo-commit: 01b38b2638239a81dcb75d62349e962dbacc0657
repos: k8s.io/kubernetes: master:6dbec8e25592d47fc8a8269c86d4b5fa838d320b, 112625:1935982b6d2e79a3cae3e1cd9d80fa64fa57f8e9
revision: v1.26.0-alpha.1.19+01b38b2638239a

Test Failures


kubetest Node Tests 1h13m

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup -vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-infra-e2e-boskos-079 --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeSpecialFeature\]|\[NodeAlphaFeature:.+\]|\[NodeAlphaFeature\]|\[NodeFeature:Eviction\]" --test_args=--feature-gates=LocalStorageCapacityIsolation=true --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service" --extra-log="{\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}" --test-timeout=7h0m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/crio/latest/image-config-cgrpv1-serial.yaml: exit status 1
				from junit_runner.xml
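Ginkgo selects a spec when its full name matches the `--focus` regex and does not match the `--skip` regex. As an illustration only (this is not Ginkgo's actual matcher), the selection logic can be sketched in Python using the exact expressions from the failing command above:

```python
import re

# Focus/skip expressions copied verbatim from the --ginkgo-flags above.
FOCUS = re.compile(r"\[Serial\]")
SKIP = re.compile(
    r"\[Flaky\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeSpecialFeature\]"
    r"|\[NodeAlphaFeature:.+\]|\[NodeAlphaFeature\]|\[NodeFeature:Eviction\]"
)

def selected(spec_name: str) -> bool:
    """A spec runs when it matches the focus regex and not the skip regex."""
    return bool(FOCUS.search(spec_name)) and not SKIP.search(spec_name)
```

For example, the Memory Manager spec that appears later in this log ("[sig-node] Memory Manager [Disruptive] [Serial] [Feature:MemoryManager]") is selected, while any spec tagged [Flaky] or lacking [Serial] is not.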



46 Passed Tests (collapsed)

341 Skipped Tests (collapsed)

Error lines from build-log.txt

... skipping 211 lines ...
W0921 10:18:18.704]       {
W0921 10:18:18.704]         "contents": "[Unit]\nDescription=Download and install dbus-tools.\nBefore=crio-install.service\nAfter=network-online.target\nWants=network-online.target\n\n[Service]\nType=oneshot\nExecStart=/usr/bin/rpm-ostree install --apply-live --allow-inactive dbus-tools\n\n[Install]\nWantedBy=multi-user.target\n",
W0921 10:18:18.704]         "enabled": true,
W0921 10:18:18.704]         "name": "dbus-tools-install.service"
W0921 10:18:18.705]       },
W0921 10:18:18.705]       {
W0921 10:18:18.705]         "contents": "[Unit]\nDescription=Download and install crio binaries and configurations.\nAfter=network-online.target\nWants=network-online.target\n\n[Service]\nType=oneshot\nExecStartPre=/usr/bin/bash -c '/usr/bin/curl --fail --retry 5 --retry-delay 3 --silent --show-error -o /usr/local/crio-nodee2e-installer.sh  https://raw.githubusercontent.com/cri-o/cri-o/40cdd9c2d97384eb5601c1af28e7092cdda3815e/scripts/node_e2e_installer; ln -s /usr/bin/runc /usr/local/bin/runc'\nExecStart=/usr/bin/bash /usr/local/crio-nodee2e-installer.sh\n\n[Install]\nWantedBy=multi-user.target\n",
W0921 10:18:18.705]         "enabled": true,
W0921 10:18:18.705]         "name": "crio-install.service"
W0921 10:18:18.706]       }
W0921 10:18:18.706]     ]
W0921 10:18:18.706]   },
W0921 10:18:18.706]   "passwd": {
... skipping 15 lines ...
W0921 10:18:18.909] I0921 10:18:18.866917    7005 run_remote.go:597] Creating instance {image:fedora-coreos-36-20220906-3-0-gcp-x86-64 imageDesc:fedora-coreos-36-20220906-3-0-gcp-x86-64 kernelArguments:[] project:fedora-coreos-cloud resources:{Accelerators:[]} metadata:0xc0007d59d0 machine:n1-standard-2 tests:[]} with service account "148392433129-compute@developer.gserviceaccount.com"
I0921 10:18:20.091] +++ [0921 10:18:20] Building go targets for linux/amd64
I0921 10:18:20.112]     k8s.io/kubernetes/hack/make-rules/helpers/go2make (non-static)
I0921 10:18:34.015] +++ [0921 10:18:34] Building go targets for linux/amd64
I0921 10:18:34.035]     k8s.io/code-generator/cmd/prerelease-lifecycle-gen (non-static)
I0921 10:18:40.815] +++ [0921 10:18:40] Generating prerelease lifecycle code for 28 targets
W0921 10:18:41.471] I0921 10:18:41.470891    7005 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@34.127.113.214 -- sudo sh -c 'systemctl list-units  --type=service  --state=running | grep -e containerd -e crio']
I0921 10:18:43.339] +++ [0921 10:18:43] Building go targets for linux/amd64
I0921 10:18:43.360]     k8s.io/code-generator/cmd/deepcopy-gen (non-static)
I0921 10:18:45.592] +++ [0921 10:18:45] Generating deepcopy code for 243 targets
I0921 10:18:53.014] +++ [0921 10:18:53] Building go targets for linux/amd64
I0921 10:18:53.034]     k8s.io/code-generator/cmd/defaulter-gen (non-static)
I0921 10:18:54.418] +++ [0921 10:18:54] Generating defaulter code for 96 targets
I0921 10:19:04.928] +++ [0921 10:19:04] Building go targets for linux/amd64
I0921 10:19:04.949]     k8s.io/code-generator/cmd/conversion-gen (non-static)
I0921 10:19:06.645] +++ [0921 10:19:06] Generating conversion code for 133 targets
I0921 10:19:27.611] +++ [0921 10:19:27] Building go targets for linux/amd64
I0921 10:19:27.633]     k8s.io/kube-openapi/cmd/openapi-gen (non-static)
I0921 10:19:41.169] +++ [0921 10:19:41] Generating openapi code for KUBE
W0921 10:19:47.607] E0921 10:19:47.607048    7005 ssh.go:123] failed to run SSH command: out: , err: exit status 1
I0921 10:19:56.202] +++ [0921 10:19:56] Generating openapi code for AGGREGATOR
I0921 10:19:57.766] +++ [0921 10:19:57] Generating openapi code for APIEXTENSIONS
I0921 10:19:59.567] +++ [0921 10:19:59] Generating openapi code for CODEGEN
I0921 10:20:01.115] +++ [0921 10:20:01] Generating openapi code for SAMPLEAPISERVER
I0921 10:20:02.608] make[1]: Leaving directory '/go/src/k8s.io/kubernetes'
I0921 10:20:02.967] +++ [0921 10:20:02] Building go targets for linux/amd64
I0921 10:20:02.988]     k8s.io/kubernetes/cmd/kubelet (non-static)
I0921 10:20:02.988]     k8s.io/kubernetes/test/e2e_node/e2e_node.test (test)
I0921 10:20:02.994]     github.com/onsi/ginkgo/v2/ginkgo (non-static)
I0921 10:20:02.999]     k8s.io/kubernetes/cluster/gce/gci/mounter (non-static)
I0921 10:20:03.005]     k8s.io/kubernetes/test/e2e_node/plugins/gcp-credential-provider (non-static)
W0921 10:20:07.816] I0921 10:20:07.816074    7005 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@34.127.113.214 -- sudo sh -c 'systemctl list-units  --type=service  --state=running | grep -e containerd -e crio']
W0921 10:20:09.189] E0921 10:20:09.189569    7005 ssh.go:123] failed to run SSH command: out: , err: exit status 1
W0921 10:20:29.496] I0921 10:20:29.495449    7005 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@34.127.113.214 -- sudo sh -c 'systemctl list-units  --type=service  --state=running | grep -e containerd -e crio']
W0921 10:20:30.921] E0921 10:20:30.921350    7005 ssh.go:123] failed to run SSH command: out: , err: exit status 1
W0921 10:20:51.127] I0921 10:20:51.126961    7005 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@34.127.113.214 -- sudo sh -c 'systemctl list-units  --type=service  --state=running | grep -e containerd -e crio']
W0921 10:20:52.189] E0921 10:20:52.189644    7005 ssh.go:123] failed to run SSH command: out: , err: exit status 1
W0921 10:21:12.410] I0921 10:21:12.409711    7005 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@34.127.113.214 -- sudo sh -c 'systemctl list-units  --type=service  --state=running | grep -e containerd -e crio']
W0921 10:21:13.823] E0921 10:21:13.823377    7005 ssh.go:123] failed to run SSH command: out: , err: exit status 1
W0921 10:21:34.089] I0921 10:21:34.089115    7005 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@34.127.113.214 -- sudo sh -c 'systemctl list-units  --type=service  --state=running | grep -e containerd -e crio']
I0921 10:28:18.936] make: Leaving directory '/go/src/k8s.io/kubernetes'
W0921 10:28:32.840] I0921 10:28:32.840648    7005 remote.go:106] Staging test binaries on "n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d"
W0921 10:28:32.841] I0921 10:28:32.840773    7005 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@34.127.113.214 -- mkdir /tmp/node-e2e-20220921T102832]
W0921 10:28:33.760] I0921 10:28:33.759937    7005 ssh.go:120] Running the command scp, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine /go/src/k8s.io/kubernetes/e2e_node_test.tar.gz prow@34.127.113.214:/tmp/node-e2e-20220921T102832/]
W0921 10:28:36.418] I0921 10:28:36.417763    7005 remote.go:133] Extracting tar on "n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d"
W0921 10:28:36.418] I0921 10:28:36.417823    7005 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@34.127.113.214 -- sh -c 'cd /tmp/node-e2e-20220921T102832 && tar -xzvf ./e2e_node_test.tar.gz']
W0921 10:28:39.429] I0921 10:28:39.428656    7005 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@34.127.113.214 -- mkdir /tmp/node-e2e-20220921T102832/results]
W0921 10:28:40.075] I0921 10:28:40.075231    7005 remote.go:148] Running test on "n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d"
W0921 10:28:40.075] I0921 10:28:40.075279    7005 utils.go:66] Install CNI on "n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d"
W0921 10:28:40.076] I0921 10:28:40.075336    7005 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@34.127.113.214 -- sudo sh -c 'mkdir -p /tmp/node-e2e-20220921T102832/cni/bin ; curl -s -L https://storage.googleapis.com/k8s-artifacts-cni/release/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz | tar -xz -C /tmp/node-e2e-20220921T102832/cni/bin']
W0921 10:28:41.780] I0921 10:28:41.780314    7005 utils.go:79] Adding CNI configuration on "n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d"
W0921 10:28:41.781] I0921 10:28:41.780408    7005 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@34.127.113.214 -- sudo sh -c 'mkdir -p /tmp/node-e2e-20220921T102832/cni/net.d ; echo '"'"'{
W0921 10:28:41.781]   "name": "mynet",
W0921 10:28:41.781]   "type": "bridge",
W0921 10:28:41.781]   "bridge": "mynet0",
W0921 10:28:41.781]   "isDefaultGateway": true,
W0921 10:28:41.781]   "forceAddress": false,
W0921 10:28:41.781]   "ipMasq": true,
... skipping 2 lines ...
W0921 10:28:41.782]     "type": "host-local",
W0921 10:28:41.782]     "subnet": "10.10.0.0/16"
W0921 10:28:41.782]   }
W0921 10:28:41.782] }
W0921 10:28:41.782] '"'"' > /tmp/node-e2e-20220921T102832/cni/net.d/mynet.conf']
W0921 10:28:42.447] I0921 10:28:42.447483    7005 utils.go:106] Configure iptables firewall rules on "n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d"
W0921 10:28:42.448] I0921 10:28:42.447582    7005 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@34.127.113.214 -- sudo sh -c 'iptables -I INPUT 1 -w -p tcp -j ACCEPT&&iptables -I INPUT 1 -w -p udp -j ACCEPT&&iptables -I INPUT 1 -w -p icmp -j ACCEPT&&iptables -I FORWARD 1 -w -p tcp -j ACCEPT&&iptables -I FORWARD 1 -w -p udp -j ACCEPT&&iptables -I FORWARD 1 -w -p icmp -j ACCEPT']
W0921 10:28:43.131] I0921 10:28:43.131632    7005 utils.go:92] Configuring kubelet credential provider on "n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d"
W0921 10:28:43.132] I0921 10:28:43.131708    7005 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@34.127.113.214 -- sudo sh -c 'echo '"'"'kind: CredentialProviderConfig
W0921 10:28:43.132] apiVersion: kubelet.config.k8s.io/v1beta1
W0921 10:28:43.132] providers:
W0921 10:28:43.132]   - name: gcp-credential-provider
W0921 10:28:43.132]     apiVersion: credentialprovider.kubelet.k8s.io/v1beta1
W0921 10:28:43.132]     matchImages:
W0921 10:28:43.133]     - "gcr.io"
W0921 10:28:43.133]     - "*.gcr.io"
W0921 10:28:43.133]     - "container.cloud.google.com"
W0921 10:28:43.133]     - "*.pkg.dev"
W0921 10:28:43.133]     defaultCacheDuration: 1m'"'"' > /tmp/node-e2e-20220921T102832/credential-provider.yaml']
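The CredentialProviderConfig above tells the kubelet to invoke the gcp-credential-provider only for images whose registry matches one of the `matchImages` globs. A rough Python sketch of that host matching (an approximation for illustration; the kubelet's real matcher has more detailed semantics for wildcards, ports, and paths) using the exact patterns from the YAML above:

```python
import fnmatch

# matchImages patterns from the credential-provider.yaml written above.
MATCH_IMAGES = ["gcr.io", "*.gcr.io", "container.cloud.google.com", "*.pkg.dev"]

def provider_applies(image: str) -> bool:
    """Approximate check: compare the image's registry host against each glob."""
    host = image.split("/", 1)[0]
    return any(fnmatch.fnmatch(host, pattern) for pattern in MATCH_IMAGES)
```

Under this sketch, `gcr.io/cadvisor/cadvisor:v0.43.0` would use the provider, while `registry.k8s.io/pause:3.8` would not.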
W0921 10:28:43.793] I0921 10:28:43.792758    7005 utils.go:127] Killing any existing node processes on "n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d"
W0921 10:28:43.793] I0921 10:28:43.792837    7005 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@34.127.113.214 -- sudo sh -c 'pkill kubelet ; pkill kube-apiserver ; pkill etcd ; pkill e2e_node.test']
W0921 10:28:44.487] E0921 10:28:44.487150    7005 ssh.go:123] failed to run SSH command: out: , err: exit status 1
W0921 10:28:44.487] I0921 10:28:44.487230    7005 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@34.127.113.214 -- sudo cat /etc/os-release]
W0921 10:28:45.148] I0921 10:28:45.147677    7005 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@34.127.113.214 -- sudo sh -c '/usr/bin/chcon -u system_u -r object_r -t bin_t /tmp/node-e2e-20220921T102832/kubelet && /usr/bin/chcon -u system_u -r object_r -t bin_t /tmp/node-e2e-20220921T102832/e2e_node.test && /usr/bin/chcon -u system_u -r object_r -t bin_t /tmp/node-e2e-20220921T102832/ginkgo && /usr/bin/chcon -u system_u -r object_r -t bin_t /tmp/node-e2e-20220921T102832/mounter && /usr/bin/chcon -R -u system_u -r object_r -t bin_t /tmp/node-e2e-20220921T102832/cni/bin']
W0921 10:28:45.815] I0921 10:28:45.815132    7005 node_e2e.go:200] Starting tests on "n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d"
W0921 10:28:45.816] I0921 10:28:45.815242    7005 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@34.127.113.214 -- sudo sh -c 'cd /tmp/node-e2e-20220921T102832 && timeout -k 30s 25200.000000s ./ginkgo --nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeSpecialFeature\]|\[NodeAlphaFeature:.+\]|\[NodeAlphaFeature\]|\[NodeFeature:Eviction\]" ./e2e_node.test -- --system-spec-name= --system-spec-file= --extra-envs= --runtime-config= --v 4 --node-name=n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d --report-dir=/tmp/node-e2e-20220921T102832/results --report-prefix=fedora --image-description="fedora-coreos-36-20220906-3-0-gcp-x86-64" --feature-gates=LocalStorageCapacityIsolation=true --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service" --extra-log="{\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}"']
W0921 11:29:57.860] E0921 11:29:57.855987    7005 ssh.go:123] failed to run SSH command: out: W0921 10:28:46.570485    2625 test_context.go:471] Unable to find in-cluster config, using default host : https://127.0.0.1:6443
W0921 11:29:57.860] I0921 10:28:46.570596    2625 test_context.go:488] Tolerating taints "node-role.kubernetes.io/control-plane,node-role.kubernetes.io/master" when considering if nodes are ready
W0921 11:29:57.861] Sep 21 10:28:46.570: INFO: The --provider flag is not set. Continuing as if --provider=skeleton had been used.
W0921 11:29:57.861] W0921 10:28:46.570764    2625 feature_gate.go:237] Setting GA feature gate LocalStorageCapacityIsolation=true. It will be removed in a future release.
W0921 11:29:57.861] I0921 10:28:46.570803    2625 feature_gate.go:245] feature gates: &{map[LocalStorageCapacityIsolation:true]}
W0921 11:29:57.861] I0921 10:28:46.578462    2625 mount_linux.go:283] Detected umount with safe 'not mounted' behavior
W0921 11:29:57.861] I0921 10:28:46.580272    2625 mount_linux.go:283] Detected umount with safe 'not mounted' behavior
... skipping 65 lines ...
W0921 11:29:57.872] I0921 10:28:46.776431    2625 image_list.go:157] Pre-pulling images with CRI [docker.io/nfvpe/sriov-device-plugin:v3.1 gcr.io/cadvisor/cadvisor:v0.43.0 quay.io/kubevirt/device-plugin-kvm registry.k8s.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff registry.k8s.io/e2e-test-images/agnhost:2.40 registry.k8s.io/e2e-test-images/busybox:1.29-2 registry.k8s.io/e2e-test-images/httpd:2.4.38-2 registry.k8s.io/e2e-test-images/ipc-utils:1.3 registry.k8s.io/e2e-test-images/nginx:1.14-2 registry.k8s.io/e2e-test-images/node-perf/npb-ep:1.2 registry.k8s.io/e2e-test-images/node-perf/npb-is:1.2 registry.k8s.io/e2e-test-images/node-perf/tf-wide-deep:1.2 registry.k8s.io/e2e-test-images/nonewprivs:1.3 registry.k8s.io/e2e-test-images/nonroot:1.2 registry.k8s.io/e2e-test-images/perl:5.26 registry.k8s.io/e2e-test-images/sample-device-plugin:1.3 registry.k8s.io/e2e-test-images/volume/gluster:1.3 registry.k8s.io/e2e-test-images/volume/nfs:1.3 registry.k8s.io/etcd:3.5.5-0 registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.7 registry.k8s.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa registry.k8s.io/pause:3.8 registry.k8s.io/stress:v1]
W0921 11:29:57.873] I0921 10:30:45.797032    2625 e2e_node_suite_test.go:273] Locksmithd is masked successfully
W0921 11:29:57.874] I0921 10:30:45.797086    2625 server.go:102] Starting server "services" with command "/tmp/node-e2e-20220921T102832/e2e_node.test --run-services-mode --bearer-token=BIK8MvE3vxfRQ2pl --test.timeout=0 --ginkgo.seed=1663756126 --ginkgo.timeout=59m59.999906691s --ginkgo.focus=\\[Serial\\] --ginkgo.skip=\\[Flaky\\]|\\[Benchmark\\]|\\[NodeSpecialFeature:.+\\]|\\[NodeSpecialFeature\\]|\\[NodeAlphaFeature:.+\\]|\\[NodeAlphaFeature\\]|\\[NodeFeature:Eviction\\] --ginkgo.parallel.process=1 --ginkgo.parallel.total=1 --ginkgo.slow-spec-threshold=5s --system-spec-name= --system-spec-file= --extra-envs= --runtime-config= --v 4 --node-name=n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d --report-dir=/tmp/node-e2e-20220921T102832/results --report-prefix=fedora --image-description=fedora-coreos-36-20220906-3-0-gcp-x86-64 --feature-gates=LocalStorageCapacityIsolation=true --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags=--cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service --extra-log={\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}"
W0921 11:29:57.874] I0921 10:30:45.797117    2625 util.go:48] Running readiness check for service "services"
W0921 11:29:57.874] I0921 10:30:45.797183    2625 server.go:130] Output file for server "services": /tmp/node-e2e-20220921T102832/results/services.log
W0921 11:29:57.875] I0921 10:30:45.798242    2625 server.go:160] Waiting for server "services" start command to complete
W0921 11:29:57.875] W0921 10:30:48.973178    2625 util.go:106] Health check on "https://127.0.0.1:6443/healthz" failed, status=500
W0921 11:29:57.875] I0921 10:30:49.975989    2625 services.go:68] Node services started.
W0921 11:29:57.875] I0921 10:30:49.976873    2625 kubelet.go:154] Starting kubelet
W0921 11:29:57.876] I0921 10:30:49.993026    2625 server.go:102] Starting server "kubelet" with command "/usr/bin/systemd-run -p Delegate=true -p StandardError=append:/tmp/node-e2e-20220921T102832/results/kubelet.log --unit=kubelet-20220921T102832.service --slice=runtime.slice --remain-after-exit /tmp/node-e2e-20220921T102832/kubelet --kubeconfig /tmp/node-e2e-20220921T102832/kubeconfig --root-dir /var/lib/kubelet --v 4 --feature-gates LocalStorageCapacityIsolation=true --hostname-override n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20220921T102832/kubelet-config --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service"
W0921 11:29:57.876] I0921 10:30:49.993153    2625 util.go:48] Running readiness check for service "kubelet"
W0921 11:29:57.877] I0921 10:30:49.993266    2625 server.go:130] Output file for server "kubelet": /tmp/node-e2e-20220921T102832/results/kubelet.log
W0921 11:29:57.877] I0921 10:30:49.996369    2625 server.go:160] Waiting for server "kubelet" start command to complete
... skipping 21 lines ...
W0921 11:29:57.882]     I0921 10:28:46.776431    2625 image_list.go:157] Pre-pulling images with CRI [docker.io/nfvpe/sriov-device-plugin:v3.1 gcr.io/cadvisor/cadvisor:v0.43.0 quay.io/kubevirt/device-plugin-kvm registry.k8s.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff registry.k8s.io/e2e-test-images/agnhost:2.40 registry.k8s.io/e2e-test-images/busybox:1.29-2 registry.k8s.io/e2e-test-images/httpd:2.4.38-2 registry.k8s.io/e2e-test-images/ipc-utils:1.3 registry.k8s.io/e2e-test-images/nginx:1.14-2 registry.k8s.io/e2e-test-images/node-perf/npb-ep:1.2 registry.k8s.io/e2e-test-images/node-perf/npb-is:1.2 registry.k8s.io/e2e-test-images/node-perf/tf-wide-deep:1.2 registry.k8s.io/e2e-test-images/nonewprivs:1.3 registry.k8s.io/e2e-test-images/nonroot:1.2 registry.k8s.io/e2e-test-images/perl:5.26 registry.k8s.io/e2e-test-images/sample-device-plugin:1.3 registry.k8s.io/e2e-test-images/volume/gluster:1.3 registry.k8s.io/e2e-test-images/volume/nfs:1.3 registry.k8s.io/etcd:3.5.5-0 registry.k8s.io/node-problem-detector/node-problem-detector:v0.8.7 registry.k8s.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa registry.k8s.io/pause:3.8 registry.k8s.io/stress:v1]
W0921 11:29:57.882]     I0921 10:30:45.797032    2625 e2e_node_suite_test.go:273] Locksmithd is masked successfully
W0921 11:29:57.883]     I0921 10:30:45.797086    2625 server.go:102] Starting server "services" with command "/tmp/node-e2e-20220921T102832/e2e_node.test --run-services-mode --bearer-token=BIK8MvE3vxfRQ2pl --test.timeout=0 --ginkgo.seed=1663756126 --ginkgo.timeout=59m59.999906691s --ginkgo.focus=\\[Serial\\] --ginkgo.skip=\\[Flaky\\]|\\[Benchmark\\]|\\[NodeSpecialFeature:.+\\]|\\[NodeSpecialFeature\\]|\\[NodeAlphaFeature:.+\\]|\\[NodeAlphaFeature\\]|\\[NodeFeature:Eviction\\] --ginkgo.parallel.process=1 --ginkgo.parallel.total=1 --ginkgo.slow-spec-threshold=5s --system-spec-name= --system-spec-file= --extra-envs= --runtime-config= --v 4 --node-name=n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d --report-dir=/tmp/node-e2e-20220921T102832/results --report-prefix=fedora --image-description=fedora-coreos-36-20220906-3-0-gcp-x86-64 --feature-gates=LocalStorageCapacityIsolation=true --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags=--cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service --extra-log={\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}"
W0921 11:29:57.883]     I0921 10:30:45.797117    2625 util.go:48] Running readiness check for service "services"
W0921 11:29:57.883]     I0921 10:30:45.797183    2625 server.go:130] Output file for server "services": /tmp/node-e2e-20220921T102832/results/services.log
W0921 11:29:57.883]     I0921 10:30:45.798242    2625 server.go:160] Waiting for server "services" start command to complete
W0921 11:29:57.884]     W0921 10:30:48.973178    2625 util.go:106] Health check on "https://127.0.0.1:6443/healthz" failed, status=500
W0921 11:29:57.884]     I0921 10:30:49.975989    2625 services.go:68] Node services started.
W0921 11:29:57.884]     I0921 10:30:49.976873    2625 kubelet.go:154] Starting kubelet
W0921 11:29:57.885]     I0921 10:30:49.993026    2625 server.go:102] Starting server "kubelet" with command "/usr/bin/systemd-run -p Delegate=true -p StandardError=append:/tmp/node-e2e-20220921T102832/results/kubelet.log --unit=kubelet-20220921T102832.service --slice=runtime.slice --remain-after-exit /tmp/node-e2e-20220921T102832/kubelet --kubeconfig /tmp/node-e2e-20220921T102832/kubeconfig --root-dir /var/lib/kubelet --v 4 --feature-gates LocalStorageCapacityIsolation=true --hostname-override n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20220921T102832/kubelet-config --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service"
W0921 11:29:57.885]     I0921 10:30:49.993153    2625 util.go:48] Running readiness check for service "kubelet"
W0921 11:29:57.885]     I0921 10:30:49.993266    2625 server.go:130] Output file for server "kubelet": /tmp/node-e2e-20220921T102832/results/kubelet.log
W0921 11:29:57.885]     I0921 10:30:49.996369    2625 server.go:160] Waiting for server "kubelet" start command to complete
... skipping 26 lines ...
W0921 11:29:57.890] 
W0921 11:29:57.890] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:57.890] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:57.890] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:57.890] 1 loaded units listed.
W0921 11:29:57.890] , kubelet-20220921T102832
W0921 11:29:57.891] W0921 10:31:01.287314    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:43206->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:57.891] STEP: Starting the kubelet 09/21/22 10:31:01.296
W0921 11:29:57.891] W0921 10:31:01.331302    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:57.891] Sep 21 10:31:06.334: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:57.892] Sep 21 10:31:07.337: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:57.892] Sep 21 10:31:08.340: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:57.892] Sep 21 10:31:09.343: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:57.893] Sep 21 10:31:10.346: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:57.893] Sep 21 10:31:11.350: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 16 lines ...
W0921 11:29:57.896] 
W0921 11:29:57.897] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:57.897] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:57.897] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:57.897] 1 loaded units listed.
W0921 11:29:57.897] , kubelet-20220921T102832
W0921 11:29:57.898] W0921 10:31:12.461774    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:41516->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:57.898] STEP: Starting the kubelet 09/21/22 10:31:12.47
W0921 11:29:57.898] W0921 10:31:12.502778    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:57.898] [DeferCleanup] [sig-node] Memory Manager [Disruptive] [Serial] [Feature:MemoryManager]
W0921 11:29:57.898]   dump namespaces | framework.go:173
W0921 11:29:57.899] [DeferCleanup] [sig-node] Memory Manager [Disruptive] [Serial] [Feature:MemoryManager]
W0921 11:29:57.899]   tear down framework | framework.go:170
W0921 11:29:57.899] Sep 21 10:31:17.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
W0921 11:29:57.899] STEP: Destroying namespace "memory-manager-test-2199" for this suite. 09/21/22 10:31:17.508
... skipping 24 lines ...
... skipping 21 lines ...
W0921 11:29:57.919] 
W0921 11:29:57.919] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:57.919] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:57.919] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:57.920] 1 loaded units listed.
W0921 11:29:57.920] , kubelet-20220921T102832
W0921 11:29:57.920] W0921 10:31:17.631549    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:41534->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:57.920] STEP: Starting the kubelet 09/21/22 10:31:17.641
W0921 11:29:57.920] W0921 10:31:17.676382    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:57.921] Sep 21 10:31:22.683: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:57.921] Sep 21 10:31:23.685: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:57.921] Sep 21 10:31:24.688: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:57.922] Sep 21 10:31:25.690: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:57.922] Sep 21 10:31:26.693: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:57.922] Sep 21 10:31:27.696: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 63 lines ...
W0921 11:29:57.936] 
W0921 11:29:57.936] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:57.936] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:57.937] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:57.937] 1 loaded units listed.
W0921 11:29:57.937] , kubelet-20220921T102832
W0921 11:29:57.937] W0921 10:32:06.843437    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:39526->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:57.937] STEP: Starting the kubelet 09/21/22 10:32:06.852
W0921 11:29:57.938] W0921 10:32:06.888927    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:57.938] Sep 21 10:32:11.896: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:57.938] Sep 21 10:32:12.898: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:57.938] Sep 21 10:32:13.901: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:57.939] Sep 21 10:32:14.904: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:57.939] Sep 21 10:32:15.907: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:57.939] Sep 21 10:32:16.910: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 32 lines ...
... skipping 1849 lines ...
W0921 11:29:58.336] 
W0921 11:29:58.336] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:58.336] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:58.337] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:58.337] 1 loaded units listed.
W0921 11:29:58.337] , kubelet-20220921T102832
W0921 11:29:58.337] W0921 10:37:30.127000    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:45824->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:58.337] STEP: Starting the kubelet 09/21/22 10:37:30.136
W0921 11:29:58.338] W0921 10:37:30.170262    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:58.338] Sep 21 10:37:35.177: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.338] Sep 21 10:37:36.180: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.339] Sep 21 10:37:37.183: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.339] Sep 21 10:37:38.187: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.339] Sep 21 10:37:39.189: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.340] Sep 21 10:37:40.192: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 17 lines ...
W0921 11:29:58.344] 
W0921 11:29:58.344] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:58.344] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:58.344] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:58.344] 1 loaded units listed.
W0921 11:29:58.344] , kubelet-20220921T102832
W0921 11:29:58.345] W0921 10:37:45.321520    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:33238->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:58.345] STEP: Starting the kubelet 09/21/22 10:37:45.331
W0921 11:29:58.345] W0921 10:37:45.366739    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:58.345] Sep 21 10:37:50.372: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:58.346] Sep 21 10:37:51.375: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:58.346] Sep 21 10:37:52.377: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:58.346] Sep 21 10:37:53.380: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:58.347] Sep 21 10:37:54.383: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:58.347] Sep 21 10:37:55.385: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
... skipping 26 lines ...
... skipping 23 lines ...
W0921 11:29:58.370] 
W0921 11:29:58.370] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:58.370] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:58.370] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:58.371] 1 loaded units listed.
W0921 11:29:58.371] , kubelet-20220921T102832
W0921 11:29:58.371] W0921 10:37:56.504786    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:58.371] STEP: Starting the kubelet 09/21/22 10:37:56.511
W0921 11:29:58.371] W0921 10:37:56.548683    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:58.372] Sep 21 10:38:01.554: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.372] Sep 21 10:38:02.557: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.372] Sep 21 10:38:03.560: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.373] Sep 21 10:38:04.563: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.373] Sep 21 10:38:05.566: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.373] Sep 21 10:38:06.569: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.373] [It] should use the default seccomp profile when unspecified
W0921 11:29:58.373]   test/e2e_node/seccompdefault_test.go:61
W0921 11:29:58.374] STEP: Creating a pod to test SeccompDefault 09/21/22 10:38:07.571
W0921 11:29:58.374] Sep 21 10:38:07.580: INFO: Waiting up to 5m0s for pod "seccompdefault-test-ba5e114f-fc28-46c2-af3c-fc7e8cb50c01" in namespace "seccompdefault-test-7642" to be "Succeeded or Failed"
W0921 11:29:58.374] Sep 21 10:38:07.590: INFO: Pod "seccompdefault-test-ba5e114f-fc28-46c2-af3c-fc7e8cb50c01": Phase="Pending", Reason="", readiness=false. Elapsed: 10.057444ms
W0921 11:29:58.374] Sep 21 10:38:09.594: INFO: Pod "seccompdefault-test-ba5e114f-fc28-46c2-af3c-fc7e8cb50c01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014069298s
W0921 11:29:58.375] Sep 21 10:38:11.592: INFO: Pod "seccompdefault-test-ba5e114f-fc28-46c2-af3c-fc7e8cb50c01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012282412s
W0921 11:29:58.375] Sep 21 10:38:13.593: INFO: Pod "seccompdefault-test-ba5e114f-fc28-46c2-af3c-fc7e8cb50c01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013282864s
W0921 11:29:58.375] STEP: Saw pod success 09/21/22 10:38:13.594
W0921 11:29:58.375] Sep 21 10:38:13.594: INFO: Pod "seccompdefault-test-ba5e114f-fc28-46c2-af3c-fc7e8cb50c01" satisfied condition "Succeeded or Failed"
W0921 11:29:58.376] Sep 21 10:38:13.595: INFO: Trying to get logs from node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d pod seccompdefault-test-ba5e114f-fc28-46c2-af3c-fc7e8cb50c01 container seccompdefault-test-ba5e114f-fc28-46c2-af3c-fc7e8cb50c01: <nil>
W0921 11:29:58.376] STEP: delete the pod 09/21/22 10:38:13.604
W0921 11:29:58.376] Sep 21 10:38:13.609: INFO: Waiting for pod seccompdefault-test-ba5e114f-fc28-46c2-af3c-fc7e8cb50c01 to disappear
W0921 11:29:58.376] Sep 21 10:38:13.610: INFO: Pod seccompdefault-test-ba5e114f-fc28-46c2-af3c-fc7e8cb50c01 no longer exists
W0921 11:29:58.376] [AfterEach] with SeccompDefault enabled
W0921 11:29:58.376]   test/e2e_node/util.go:181
... skipping 3 lines ...
W0921 11:29:58.377] 
W0921 11:29:58.378] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:58.378] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:58.378] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:58.378] 1 loaded units listed.
W0921 11:29:58.378] , kubelet-20220921T102832
W0921 11:29:58.378] W0921 10:38:13.710499    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:48328->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:58.379] STEP: Starting the kubelet 09/21/22 10:38:13.72
W0921 11:29:58.379] W0921 10:38:13.755961    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:58.379] Sep 21 10:38:18.759: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.379] Sep 21 10:38:19.762: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.380] Sep 21 10:38:20.765: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.380] Sep 21 10:38:21.768: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.380] Sep 21 10:38:22.771: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.381] Sep 21 10:38:23.774: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 26 lines ...
W0921 11:29:58.389]     Sep 21 10:38:07.580: INFO: Waiting up to 5m0s for pod "seccompdefault-test-ba5e114f-fc28-46c2-af3c-fc7e8cb50c01" in namespace "seccompdefault-test-7642" to be "Succeeded or Failed"
W0921 11:29:58.390]     Sep 21 10:38:07.590: INFO: Pod "seccompdefault-test-ba5e114f-fc28-46c2-af3c-fc7e8cb50c01": Phase="Pending", Reason="", readiness=false. Elapsed: 10.057444ms
W0921 11:29:58.390]     Sep 21 10:38:09.594: INFO: Pod "seccompdefault-test-ba5e114f-fc28-46c2-af3c-fc7e8cb50c01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014069298s
W0921 11:29:58.390]     Sep 21 10:38:11.592: INFO: Pod "seccompdefault-test-ba5e114f-fc28-46c2-af3c-fc7e8cb50c01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012282412s
W0921 11:29:58.390]     Sep 21 10:38:13.593: INFO: Pod "seccompdefault-test-ba5e114f-fc28-46c2-af3c-fc7e8cb50c01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013282864s
W0921 11:29:58.391]     STEP: Saw pod success 09/21/22 10:38:13.594
W0921 11:29:58.391]     Sep 21 10:38:13.594: INFO: Pod "seccompdefault-test-ba5e114f-fc28-46c2-af3c-fc7e8cb50c01" satisfied condition "Succeeded or Failed"
W0921 11:29:58.391]     Sep 21 10:38:13.595: INFO: Trying to get logs from node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d pod seccompdefault-test-ba5e114f-fc28-46c2-af3c-fc7e8cb50c01 container seccompdefault-test-ba5e114f-fc28-46c2-af3c-fc7e8cb50c01: <nil>
W0921 11:29:58.391]     STEP: delete the pod 09/21/22 10:38:13.604
W0921 11:29:58.391]     Sep 21 10:38:13.609: INFO: Waiting for pod seccompdefault-test-ba5e114f-fc28-46c2-af3c-fc7e8cb50c01 to disappear
W0921 11:29:58.392]     Sep 21 10:38:13.610: INFO: Pod seccompdefault-test-ba5e114f-fc28-46c2-af3c-fc7e8cb50c01 no longer exists
W0921 11:29:58.392]     [AfterEach] with SeccompDefault enabled
W0921 11:29:58.392]       test/e2e_node/util.go:181
... skipping 3 lines ...
W0921 11:29:58.393] 
W0921 11:29:58.393]     LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:58.394]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:58.394]     SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:58.394]     1 loaded units listed.
W0921 11:29:58.394]     , kubelet-20220921T102832
W0921 11:29:58.394]     W0921 10:38:13.710499    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:48328->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:58.394]     STEP: Starting the kubelet 09/21/22 10:38:13.72
W0921 11:29:58.395]     W0921 10:38:13.755961    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
... skipping 27 lines ...
W0921 11:29:58.402] 
W0921 11:29:58.402] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:58.402] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:58.402] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:58.402] 1 loaded units listed.
W0921 11:29:58.403] , kubelet-20220921T102832
W0921 11:29:58.403] W0921 10:38:25.038358    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:49668->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:58.403] STEP: Starting the kubelet 09/21/22 10:38:25.047
W0921 11:29:58.403] W0921 10:38:25.080654    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:58.404] [BeforeEach] when multiple guaranteed pods started
W0921 11:29:58.404]   test/e2e_node/memory_manager_test.go:483
W0921 11:29:58.404] [JustBeforeEach] [sig-node] Memory Manager [Disruptive] [Serial] [Feature:MemoryManager]
W0921 11:29:58.404]   test/e2e_node/memory_manager_test.go:335
W0921 11:29:58.404] STEP: Waiting for hugepages resource to become available on the local node 09/21/22 10:38:30.086
W0921 11:29:58.404] [JustBeforeEach] when multiple guaranteed pods started
... skipping 104 lines ...
W0921 11:29:58.426] 
W0921 11:29:58.426] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:58.426] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:58.426] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:58.426] 1 loaded units listed.
W0921 11:29:58.427] , kubelet-20220921T102832
W0921 11:29:58.427] W0921 10:39:44.292420    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:50570->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:58.427] STEP: Starting the kubelet 09/21/22 10:39:44.301
W0921 11:29:58.427] W0921 10:39:44.336703    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:58.428] Sep 21 10:39:49.341: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.428] Sep 21 10:39:50.343: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.428] Sep 21 10:39:51.345: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.429] Sep 21 10:39:52.349: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.429] Sep 21 10:39:53.352: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.429] Sep 21 10:39:54.354: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 32 lines ...
... skipping 422 lines ...
W0921 11:29:58.537] 
W0921 11:29:58.537] Sep 21 10:54:37.361: INFO: Dumping perf data for test "resource_10" to "/tmp/node-e2e-20220921T102832/results/performance-memory-fedora-resource_10.json".
W0921 11:29:58.538] Sep 21 10:54:37.362: INFO: Dumping perf data for test "resource_10" to "/tmp/node-e2e-20220921T102832/results/performance-cpu-fedora-resource_10.json".
W0921 11:29:58.538] [AfterEach] [sig-node] Resource-usage [Serial] [Slow]
W0921 11:29:58.538]   test/e2e_node/resource_usage_test.go:62
W0921 11:29:58.538] W0921 10:54:37.363535    2625 metrics_grabber.go:110] Can't find any pods in namespace kube-system to grab metrics from
W0921 11:29:58.538] Sep 21 10:54:37.382: INFO: runtime operation error metrics:
W0921 11:29:58.538] node "n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d" runtime operation error rate:
W0921 11:29:58.538] 
W0921 11:29:58.538] 
W0921 11:29:58.539] [DeferCleanup] [sig-node] Resource-usage [Serial] [Slow]
W0921 11:29:58.539]   dump namespaces | framework.go:173
W0921 11:29:58.539] [DeferCleanup] [sig-node] Resource-usage [Serial] [Slow]
W0921 11:29:58.539]   tear down framework | framework.go:170
... skipping 209 lines ...
... skipping 635 lines ...
W0921 11:29:58.715] 
W0921 11:29:58.715] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:58.715] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:58.715] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:58.715] 1 loaded units listed.
W0921 11:29:58.715] , kubelet-20220921T102832
W0921 11:29:58.716] W0921 10:57:09.645173    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:58.716] STEP: Starting the kubelet 09/21/22 10:57:09.651
W0921 11:29:58.716] W0921 10:57:09.686129    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:58.716] Sep 21 10:57:14.689: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.716] Sep 21 10:57:15.692: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.717] Sep 21 10:57:16.695: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.717] Sep 21 10:57:17.698: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.717] Sep 21 10:57:18.701: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.718] Sep 21 10:57:19.704: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 31 lines ...
... skipping 18 lines ...
W0921 11:29:58.728] STEP: Creating a kubernetes client 09/21/22 10:57:20.711
W0921 11:29:58.729] STEP: Building a namespace api object, basename downward-api 09/21/22 10:57:20.711
W0921 11:29:58.729] Sep 21 10:57:20.715: INFO: Skipping waiting for service account
W0921 11:29:58.729] [It] should provide container's limits.hugepages-<pagesize> and requests.hugepages-<pagesize> as env vars
W0921 11:29:58.729]   test/e2e/common/node/downwardapi.go:293
W0921 11:29:58.729] STEP: Creating a pod to test downward api env vars 09/21/22 10:57:20.715
W0921 11:29:58.729] Sep 21 10:57:20.724: INFO: Waiting up to 5m0s for pod "downward-api-0ab6a0fe-4099-4280-9fc4-6eeddbaebad2" in namespace "downward-api-111" to be "Succeeded or Failed"
W0921 11:29:58.730] Sep 21 10:57:20.730: INFO: Pod "downward-api-0ab6a0fe-4099-4280-9fc4-6eeddbaebad2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.79054ms
W0921 11:29:58.730] Sep 21 10:57:22.733: INFO: Pod "downward-api-0ab6a0fe-4099-4280-9fc4-6eeddbaebad2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008697164s
W0921 11:29:58.730] Sep 21 10:57:24.733: INFO: Pod "downward-api-0ab6a0fe-4099-4280-9fc4-6eeddbaebad2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008304233s
W0921 11:29:58.730] Sep 21 10:57:26.733: INFO: Pod "downward-api-0ab6a0fe-4099-4280-9fc4-6eeddbaebad2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.008236988s
W0921 11:29:58.731] STEP: Saw pod success 09/21/22 10:57:26.733
W0921 11:29:58.731] Sep 21 10:57:26.733: INFO: Pod "downward-api-0ab6a0fe-4099-4280-9fc4-6eeddbaebad2" satisfied condition "Succeeded or Failed"
W0921 11:29:58.731] Sep 21 10:57:26.734: INFO: Trying to get logs from node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d pod downward-api-0ab6a0fe-4099-4280-9fc4-6eeddbaebad2 container dapi-container: <nil>
W0921 11:29:58.731] STEP: delete the pod 09/21/22 10:57:26.743
W0921 11:29:58.731] Sep 21 10:57:26.746: INFO: Waiting for pod downward-api-0ab6a0fe-4099-4280-9fc4-6eeddbaebad2 to disappear
W0921 11:29:58.732] Sep 21 10:57:26.747: INFO: Pod downward-api-0ab6a0fe-4099-4280-9fc4-6eeddbaebad2 no longer exists
W0921 11:29:58.732] [DeferCleanup] [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages]
W0921 11:29:58.732]   dump namespaces | framework.go:173
... skipping 16 lines ...
... skipping 216 lines ...
W0921 11:29:58.775] 
W0921 11:29:58.776] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:58.776] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:58.776] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:58.776] 1 loaded units listed.
W0921 11:29:58.776] , kubelet-20220921T102832
W0921 11:29:58.777] W0921 10:58:04.911654    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:34284->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:58.777] STEP: Starting the kubelet 09/21/22 10:58:04.918
W0921 11:29:58.777] W0921 10:58:04.953863    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:58.777] Sep 21 10:58:09.957: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.778] Sep 21 10:58:10.960: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.778] Sep 21 10:58:11.962: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.778] Sep 21 10:58:12.966: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.779] Sep 21 10:58:13.969: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.779] Sep 21 10:58:14.971: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.779] [It] should set pids.max for Pod
W0921 11:29:58.779]   test/e2e_node/pids_test.go:90
W0921 11:29:58.779] STEP: by creating a G pod 09/21/22 10:58:15.974
W0921 11:29:58.780] STEP: checking if the expected pids settings were applied 09/21/22 10:58:15.983
W0921 11:29:58.780] Sep 21 10:58:15.983: INFO: Pod to run command: expected=1024; actual=$(cat /tmp/pids//kubepods.slice/kubepods-pod8956d093_cc6b_4a26_ac39_e5b4ec75abd9.slice/pids.max); if [ "$expected" -ne "$actual" ]; then exit 1; fi; 
W0921 11:29:58.780] Sep 21 10:58:15.990: INFO: Waiting up to 5m0s for pod "pod67a20102-923e-4072-b567-94158fdd8549" in namespace "pids-limit-test-4001" to be "Succeeded or Failed"
W0921 11:29:58.780] Sep 21 10:58:15.996: INFO: Pod "pod67a20102-923e-4072-b567-94158fdd8549": Phase="Pending", Reason="", readiness=false. Elapsed: 5.673992ms
W0921 11:29:58.781] Sep 21 10:58:18.004: INFO: Pod "pod67a20102-923e-4072-b567-94158fdd8549": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013958674s
W0921 11:29:58.781] Sep 21 10:58:19.999: INFO: Pod "pod67a20102-923e-4072-b567-94158fdd8549": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009319846s
W0921 11:29:58.781] Sep 21 10:58:21.998: INFO: Pod "pod67a20102-923e-4072-b567-94158fdd8549": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.008089329s
W0921 11:29:58.781] STEP: Saw pod success 09/21/22 10:58:21.998
W0921 11:29:58.782] Sep 21 10:58:21.998: INFO: Pod "pod67a20102-923e-4072-b567-94158fdd8549" satisfied condition "Succeeded or Failed"
W0921 11:29:58.782] [AfterEach] With config updated with pids limits
W0921 11:29:58.782]   test/e2e_node/util.go:181
W0921 11:29:58.782] STEP: Stopping the kubelet 09/21/22 10:58:21.998
W0921 11:29:58.782] Sep 21 10:58:22.033: INFO: Get running kubelet with systemctl:   UNIT                            LOAD   ACTIVE SUB     DESCRIPTION
W0921 11:29:58.783]   kubelet-20220921T102832.service loaded active running /tmp/node-e2e-20220921T102832/kubelet --kubeconfig /tmp/node-e2e-20220921T102832/kubeconfig --root-dir /var/lib/kubelet --v 4 --feature-gates LocalStorageCapacityIsolation=true --hostname-override n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20220921T102832/kubelet-config --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service
W0921 11:29:58.783] 
W0921 11:29:58.783] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:58.783] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:58.783] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:58.783] 1 loaded units listed.
W0921 11:29:58.784] , kubelet-20220921T102832
W0921 11:29:58.784] W0921 10:58:22.097319    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:56886->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:58.784] STEP: Starting the kubelet 09/21/22 10:58:22.106
W0921 11:29:58.784] W0921 10:58:22.144390    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:58.785] Sep 21 10:58:27.147: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.785] Sep 21 10:58:28.150: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.785] Sep 21 10:58:29.153: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.786] Sep 21 10:58:30.155: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.786] Sep 21 10:58:31.158: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.786] Sep 21 10:58:32.162: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 26 lines ...
W0921 11:29:58.791] 
W0921 11:29:58.791]     LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:58.791]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:58.791]     SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:58.791]     1 loaded units listed.
W0921 11:29:58.791]     , kubelet-20220921T102832
W0921 11:29:58.792]     W0921 10:58:04.911654    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:34284->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:58.792]     STEP: Starting the kubelet 09/21/22 10:58:04.918
W0921 11:29:58.792]     W0921 10:58:04.953863    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:58.792]     Sep 21 10:58:09.957: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.793]     Sep 21 10:58:10.960: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.793]     Sep 21 10:58:11.962: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.793]     Sep 21 10:58:12.966: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.793]     Sep 21 10:58:13.969: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.794]     Sep 21 10:58:14.971: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.794]     [It] should set pids.max for Pod
W0921 11:29:58.794]       test/e2e_node/pids_test.go:90
W0921 11:29:58.794]     STEP: by creating a G pod 09/21/22 10:58:15.974
W0921 11:29:58.794]     STEP: checking if the expected pids settings were applied 09/21/22 10:58:15.983
W0921 11:29:58.795]     Sep 21 10:58:15.983: INFO: Pod to run command: expected=1024; actual=$(cat /tmp/pids//kubepods.slice/kubepods-pod8956d093_cc6b_4a26_ac39_e5b4ec75abd9.slice/pids.max); if [ "$expected" -ne "$actual" ]; then exit 1; fi; 
W0921 11:29:58.795]     Sep 21 10:58:15.990: INFO: Waiting up to 5m0s for pod "pod67a20102-923e-4072-b567-94158fdd8549" in namespace "pids-limit-test-4001" to be "Succeeded or Failed"
W0921 11:29:58.795]     Sep 21 10:58:15.996: INFO: Pod "pod67a20102-923e-4072-b567-94158fdd8549": Phase="Pending", Reason="", readiness=false. Elapsed: 5.673992ms
W0921 11:29:58.795]     Sep 21 10:58:18.004: INFO: Pod "pod67a20102-923e-4072-b567-94158fdd8549": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013958674s
W0921 11:29:58.796]     Sep 21 10:58:19.999: INFO: Pod "pod67a20102-923e-4072-b567-94158fdd8549": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009319846s
W0921 11:29:58.796]     Sep 21 10:58:21.998: INFO: Pod "pod67a20102-923e-4072-b567-94158fdd8549": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.008089329s
W0921 11:29:58.796]     STEP: Saw pod success 09/21/22 10:58:21.998
W0921 11:29:58.796]     Sep 21 10:58:21.998: INFO: Pod "pod67a20102-923e-4072-b567-94158fdd8549" satisfied condition "Succeeded or Failed"
W0921 11:29:58.796]     [AfterEach] With config updated with pids limits
W0921 11:29:58.797]       test/e2e_node/util.go:181
W0921 11:29:58.797]     STEP: Stopping the kubelet 09/21/22 10:58:21.998
W0921 11:29:58.797]     Sep 21 10:58:22.033: INFO: Get running kubelet with systemctl:   UNIT                            LOAD   ACTIVE SUB     DESCRIPTION
W0921 11:29:58.798]       kubelet-20220921T102832.service loaded active running /tmp/node-e2e-20220921T102832/kubelet --kubeconfig /tmp/node-e2e-20220921T102832/kubeconfig --root-dir /var/lib/kubelet --v 4 --feature-gates LocalStorageCapacityIsolation=true --hostname-override n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20220921T102832/kubelet-config --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service
W0921 11:29:58.798] 
W0921 11:29:58.798]     LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:58.798]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:58.798]     SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:58.798]     1 loaded units listed.
W0921 11:29:58.799]     , kubelet-20220921T102832
W0921 11:29:58.799]     W0921 10:58:22.097319    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:56886->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:58.799]     STEP: Starting the kubelet 09/21/22 10:58:22.106
W0921 11:29:58.799]     W0921 10:58:22.144390    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:58.800]     Sep 21 10:58:27.147: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.800]     Sep 21 10:58:28.150: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.800]     Sep 21 10:58:29.153: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.801]     Sep 21 10:58:30.155: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.801]     Sep 21 10:58:31.158: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.801]     Sep 21 10:58:32.162: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 79 lines ...
W0921 11:29:58.816] 
W0921 11:29:58.817] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:58.817] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:58.817] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:58.817] 1 loaded units listed.
W0921 11:29:58.817] , kubelet-20220921T102832
W0921 11:29:58.818] W0921 10:58:33.305530    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:60742->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:58.818] STEP: Starting the kubelet 09/21/22 10:58:33.314
W0921 11:29:58.818] W0921 10:58:33.347685    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:58.819] Sep 21 10:58:38.354: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:58.819] Sep 21 10:58:39.356: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:58.819] Sep 21 10:58:40.359: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:58.820] Sep 21 10:58:41.361: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:58.820] Sep 21 10:58:42.364: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:58.821] Sep 21 10:58:43.367: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
... skipping 24 lines ...
W0921 11:29:58.826] STEP: Waiting for evictions to occur 09/21/22 10:59:18.446
W0921 11:29:58.826] Sep 21 10:59:18.459: INFO: Kubelet Metrics: []
W0921 11:29:58.826] Sep 21 10:59:18.469: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 15014227968
W0921 11:29:58.827] Sep 21 10:59:18.469: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 15014227968
W0921 11:29:58.827] Sep 21 10:59:18.471: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.827] Sep 21 10:59:18.471: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.827] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:18.472
W0921 11:29:58.827] Sep 21 10:59:20.485: INFO: Kubelet Metrics: []
W0921 11:29:58.828] Sep 21 10:59:20.496: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 15014227968
W0921 11:29:58.828] Sep 21 10:59:20.496: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 15014227968
W0921 11:29:58.828] Sep 21 10:59:20.498: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.828] Sep 21 10:59:20.498: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.828] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:20.498
W0921 11:29:58.829] Sep 21 10:59:22.523: INFO: Kubelet Metrics: []
W0921 11:29:58.829] Sep 21 10:59:22.535: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 15014227968
W0921 11:29:58.829] Sep 21 10:59:22.535: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 15014227968
W0921 11:29:58.829] Sep 21 10:59:22.537: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.829] Sep 21 10:59:22.537: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.830] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:22.537
W0921 11:29:58.830] Sep 21 10:59:24.548: INFO: Kubelet Metrics: []
W0921 11:29:58.830] Sep 21 10:59:24.559: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811463680
W0921 11:29:58.830] Sep 21 10:59:24.560: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811463680
W0921 11:29:58.830] Sep 21 10:59:24.562: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.831] Sep 21 10:59:24.562: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.831] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:24.562
W0921 11:29:58.831] Sep 21 10:59:26.574: INFO: Kubelet Metrics: []
W0921 11:29:58.831] Sep 21 10:59:26.592: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811463680
W0921 11:29:58.831] Sep 21 10:59:26.592: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811463680
W0921 11:29:58.832] Sep 21 10:59:26.595: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.832] Sep 21 10:59:26.595: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.832] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:26.595
W0921 11:29:58.832] Sep 21 10:59:28.606: INFO: Kubelet Metrics: []
W0921 11:29:58.833] Sep 21 10:59:28.619: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811463680
W0921 11:29:58.833] Sep 21 10:59:28.619: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811463680
W0921 11:29:58.833] Sep 21 10:59:28.619: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.833] Sep 21 10:59:28.619: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.833] Sep 21 10:59:28.619: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.834] Sep 21 10:59:28.621: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.834] Sep 21 10:59:28.621: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.834] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:28.621
W0921 11:29:58.834] Sep 21 10:59:30.635: INFO: Kubelet Metrics: []
W0921 11:29:58.834] Sep 21 10:59:30.661: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811463680
W0921 11:29:58.834] Sep 21 10:59:30.661: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811463680
W0921 11:29:58.835] Sep 21 10:59:30.661: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.835] Sep 21 10:59:30.661: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.835] Sep 21 10:59:30.661: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.835] Sep 21 10:59:30.664: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.835] Sep 21 10:59:30.664: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.836] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:30.664
W0921 11:29:58.836] Sep 21 10:59:32.677: INFO: Kubelet Metrics: []
W0921 11:29:58.836] Sep 21 10:59:32.690: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811463680
W0921 11:29:58.836] Sep 21 10:59:32.690: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811463680
W0921 11:29:58.836] Sep 21 10:59:32.690: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.837] Sep 21 10:59:32.690: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.837] Sep 21 10:59:32.690: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.837] Sep 21 10:59:32.692: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.837] Sep 21 10:59:32.692: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.837] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:32.692
W0921 11:29:58.838] Sep 21 10:59:34.704: INFO: Kubelet Metrics: []
W0921 11:29:58.838] Sep 21 10:59:34.717: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811811840
W0921 11:29:58.838] Sep 21 10:59:34.717: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811811840
W0921 11:29:58.838] Sep 21 10:59:34.717: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.838] Sep 21 10:59:34.717: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.839] Sep 21 10:59:34.717: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.839] Sep 21 10:59:34.717: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
W0921 11:29:58.839] Sep 21 10:59:34.717: INFO: --- summary Volume: test-volume UsedBytes: 134152192
W0921 11:29:58.839] Sep 21 10:59:34.720: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.839] Sep 21 10:59:34.720: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.840] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:34.72
W0921 11:29:58.840] Sep 21 10:59:36.738: INFO: Kubelet Metrics: []
W0921 11:29:58.840] Sep 21 10:59:36.756: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811811840
W0921 11:29:58.840] Sep 21 10:59:36.756: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811811840
W0921 11:29:58.840] Sep 21 10:59:36.756: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
W0921 11:29:58.841] Sep 21 10:59:36.756: INFO: --- summary Volume: test-volume UsedBytes: 134152192
W0921 11:29:58.841] Sep 21 10:59:36.756: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.841] Sep 21 10:59:36.756: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.841] Sep 21 10:59:36.756: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.841] Sep 21 10:59:36.759: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.841] Sep 21 10:59:36.760: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.842] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:36.76
W0921 11:29:58.842] Sep 21 10:59:38.774: INFO: Kubelet Metrics: []
W0921 11:29:58.842] Sep 21 10:59:38.798: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811811840
W0921 11:29:58.842] Sep 21 10:59:38.798: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811811840
W0921 11:29:58.842] Sep 21 10:59:38.798: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.843] Sep 21 10:59:38.798: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.843] Sep 21 10:59:38.798: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.843] Sep 21 10:59:38.798: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
W0921 11:29:58.843] Sep 21 10:59:38.798: INFO: --- summary Volume: test-volume UsedBytes: 134152192
W0921 11:29:58.843] Sep 21 10:59:38.800: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.843] Sep 21 10:59:38.800: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.844] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:38.8
W0921 11:29:58.844] Sep 21 10:59:40.811: INFO: Kubelet Metrics: []
W0921 11:29:58.844] Sep 21 10:59:40.823: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811811840
W0921 11:29:58.844] Sep 21 10:59:40.823: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811811840
W0921 11:29:58.844] Sep 21 10:59:40.823: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
W0921 11:29:58.845] Sep 21 10:59:40.823: INFO: --- summary Volume: test-volume UsedBytes: 134152192
W0921 11:29:58.845] Sep 21 10:59:40.823: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.845] Sep 21 10:59:40.823: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.845] Sep 21 10:59:40.823: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.845] Sep 21 10:59:40.825: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.845] Sep 21 10:59:40.825: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.846] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:40.826
W0921 11:29:58.846] Sep 21 10:59:42.839: INFO: Kubelet Metrics: []
W0921 11:29:58.846] Sep 21 10:59:42.852: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811811840
W0921 11:29:58.846] Sep 21 10:59:42.852: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811811840
W0921 11:29:58.846] Sep 21 10:59:42.852: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.847] Sep 21 10:59:42.852: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.847] Sep 21 10:59:42.852: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.847] Sep 21 10:59:42.852: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
W0921 11:29:58.847] Sep 21 10:59:42.852: INFO: --- summary Volume: test-volume UsedBytes: 134152192
W0921 11:29:58.847] Sep 21 10:59:42.854: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.847] Sep 21 10:59:42.854: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.848] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:42.854
W0921 11:29:58.848] Sep 21 10:59:44.867: INFO: Kubelet Metrics: []
W0921 11:29:58.848] Sep 21 10:59:44.878: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811811840
W0921 11:29:58.848] Sep 21 10:59:44.878: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811811840
W0921 11:29:58.848] Sep 21 10:59:44.878: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
W0921 11:29:58.849] Sep 21 10:59:44.878: INFO: --- summary Volume: test-volume UsedBytes: 134152192
W0921 11:29:58.849] Sep 21 10:59:44.878: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.849] Sep 21 10:59:44.878: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.849] Sep 21 10:59:44.878: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.849] Sep 21 10:59:44.880: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.849] Sep 21 10:59:44.880: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.850] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:44.88
W0921 11:29:58.850] Sep 21 10:59:46.892: INFO: Kubelet Metrics: []
W0921 11:29:58.850] Sep 21 10:59:46.910: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811811840
W0921 11:29:58.850] Sep 21 10:59:46.910: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811811840
W0921 11:29:58.850] Sep 21 10:59:46.910: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.850] Sep 21 10:59:46.910: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.851] Sep 21 10:59:46.910: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.851] Sep 21 10:59:46.910: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
W0921 11:29:58.851] Sep 21 10:59:46.910: INFO: --- summary Volume: test-volume UsedBytes: 134152192
W0921 11:29:58.851] Sep 21 10:59:46.913: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.851] Sep 21 10:59:46.913: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.851] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:46.913
W0921 11:29:58.851] Sep 21 10:59:48.925: INFO: Kubelet Metrics: []
W0921 11:29:58.852] Sep 21 10:59:48.938: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.852] Sep 21 10:59:48.939: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.852] Sep 21 10:59:48.939: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
W0921 11:29:58.852] Sep 21 10:59:48.939: INFO: --- summary Volume: test-volume UsedBytes: 134152192
W0921 11:29:58.852] Sep 21 10:59:48.939: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.852] Sep 21 10:59:48.939: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.853] Sep 21 10:59:48.939: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.853] Sep 21 10:59:48.941: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.853] Sep 21 10:59:48.941: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.853] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:48.941
W0921 11:29:58.853] Sep 21 10:59:50.959: INFO: Kubelet Metrics: []
W0921 11:29:58.853] Sep 21 10:59:50.976: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.854] Sep 21 10:59:50.976: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.854] Sep 21 10:59:50.976: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
W0921 11:29:58.854] Sep 21 10:59:50.976: INFO: --- summary Volume: test-volume UsedBytes: 134152192
W0921 11:29:58.854] Sep 21 10:59:50.976: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.854] Sep 21 10:59:50.976: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.854] Sep 21 10:59:50.976: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.855] Sep 21 10:59:50.979: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.855] Sep 21 10:59:50.979: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.855] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:50.979
W0921 11:29:58.855] Sep 21 10:59:52.991: INFO: Kubelet Metrics: []
W0921 11:29:58.855] Sep 21 10:59:53.003: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.855] Sep 21 10:59:53.003: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.856] Sep 21 10:59:53.003: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.856] Sep 21 10:59:53.003: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.856] Sep 21 10:59:53.003: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.856] Sep 21 10:59:53.003: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
W0921 11:29:58.856] Sep 21 10:59:53.003: INFO: --- summary Volume: test-volume UsedBytes: 134152192
W0921 11:29:58.857] Sep 21 10:59:53.005: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.857] Sep 21 10:59:53.005: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.857] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:53.005
W0921 11:29:58.857] Sep 21 10:59:55.019: INFO: Kubelet Metrics: []
W0921 11:29:58.857] Sep 21 10:59:55.031: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.858] Sep 21 10:59:55.031: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.858] Sep 21 10:59:55.031: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.858] Sep 21 10:59:55.031: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.858] Sep 21 10:59:55.031: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.858] Sep 21 10:59:55.031: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
W0921 11:29:58.858] Sep 21 10:59:55.031: INFO: --- summary Volume: test-volume UsedBytes: 134152192
W0921 11:29:58.859] Sep 21 10:59:55.033: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.859] Sep 21 10:59:55.033: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.859] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:55.033
W0921 11:29:58.859] Sep 21 10:59:57.056: INFO: Kubelet Metrics: []
W0921 11:29:58.859] Sep 21 10:59:57.073: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.860] Sep 21 10:59:57.074: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.860] Sep 21 10:59:57.074: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.860] Sep 21 10:59:57.074: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.860] Sep 21 10:59:57.074: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.860] Sep 21 10:59:57.074: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
W0921 11:29:58.861] Sep 21 10:59:57.074: INFO: --- summary Volume: test-volume UsedBytes: 134152192
W0921 11:29:58.861] Sep 21 10:59:57.076: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.861] Sep 21 10:59:57.076: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.861] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:57.076
W0921 11:29:58.861] Sep 21 10:59:59.087: INFO: Kubelet Metrics: []
W0921 11:29:58.862] Sep 21 10:59:59.098: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.862] Sep 21 10:59:59.098: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.862] Sep 21 10:59:59.098: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.862] Sep 21 10:59:59.098: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.862] Sep 21 10:59:59.098: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.863] Sep 21 10:59:59.098: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
W0921 11:29:58.863] Sep 21 10:59:59.098: INFO: --- summary Volume: test-volume UsedBytes: 134152192
W0921 11:29:58.863] Sep 21 10:59:59.101: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.863] Sep 21 10:59:59.101: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.863] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:59.101
W0921 11:29:58.863] Sep 21 11:00:01.114: INFO: Kubelet Metrics: []
W0921 11:29:58.864] Sep 21 11:00:01.126: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.864] Sep 21 11:00:01.126: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.864] Sep 21 11:00:01.126: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
W0921 11:29:58.864] Sep 21 11:00:01.126: INFO: --- summary Volume: test-volume UsedBytes: 134152192
W0921 11:29:58.864] Sep 21 11:00:01.126: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.864] Sep 21 11:00:01.126: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.865] Sep 21 11:00:01.126: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.865] Sep 21 11:00:01.129: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.865] Sep 21 11:00:01.129: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.865] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:01.129
W0921 11:29:58.865] Sep 21 11:00:03.143: INFO: Kubelet Metrics: []
W0921 11:29:58.865] Sep 21 11:00:03.155: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.866] Sep 21 11:00:03.155: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.866] Sep 21 11:00:03.155: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.866] Sep 21 11:00:03.155: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.866] Sep 21 11:00:03.155: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.866] Sep 21 11:00:03.155: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
W0921 11:29:58.866] Sep 21 11:00:03.155: INFO: --- summary Volume: test-volume UsedBytes: 134152192
W0921 11:29:58.867] Sep 21 11:00:03.157: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.867] Sep 21 11:00:03.157: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.867] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:03.157
W0921 11:29:58.867] Sep 21 11:00:05.175: INFO: Kubelet Metrics: []
W0921 11:29:58.867] Sep 21 11:00:05.194: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.867] Sep 21 11:00:05.194: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.868] Sep 21 11:00:05.194: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.868] Sep 21 11:00:05.194: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.868] Sep 21 11:00:05.194: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.868] Sep 21 11:00:05.194: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
W0921 11:29:58.868] Sep 21 11:00:05.194: INFO: --- summary Volume: test-volume UsedBytes: 134152192
W0921 11:29:58.868] Sep 21 11:00:05.197: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.869] Sep 21 11:00:05.197: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.869] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:05.197
W0921 11:29:58.869] Sep 21 11:00:07.218: INFO: Kubelet Metrics: []
W0921 11:29:58.869] Sep 21 11:00:07.238: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947278848
W0921 11:29:58.869] Sep 21 11:00:07.238: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947278848
W0921 11:29:58.869] Sep 21 11:00:07.238: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.870] Sep 21 11:00:07.238: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.870] Sep 21 11:00:07.238: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.870] Sep 21 11:00:07.241: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:58.870] Sep 21 11:00:07.241: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.870] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:07.241
W0921 11:29:58.870] STEP: making sure pressure from test has surfaced before continuing 09/21/22 11:00:07.241
W0921 11:29:58.871] STEP: Waiting for NodeCondition: NoPressure to no longer exist on the node 09/21/22 11:00:27.241
W0921 11:29:58.871] Sep 21 11:00:27.253: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947278848
W0921 11:29:58.871] Sep 21 11:00:27.253: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947278848
W0921 11:29:58.871] Sep 21 11:00:27.253: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.871] Sep 21 11:00:27.253: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
... skipping 3 lines ...
W0921 11:29:58.872] Sep 21 11:00:27.276: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947278848
W0921 11:29:58.872] Sep 21 11:00:27.276: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947278848
W0921 11:29:58.872] Sep 21 11:00:27.276: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.873] Sep 21 11:00:27.276: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.873] Sep 21 11:00:27.276: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.873] Sep 21 11:00:27.286: INFO: Kubelet Metrics: []
W0921 11:29:58.873] Sep 21 11:00:27.289: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:58.873] Sep 21 11:00:27.289: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.874] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:27.289
W0921 11:29:58.874] Sep 21 11:00:29.302: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947278848
W0921 11:29:58.874] Sep 21 11:00:29.302: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947278848
W0921 11:29:58.874] Sep 21 11:00:29.302: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.874] Sep 21 11:00:29.302: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.874] Sep 21 11:00:29.302: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.875] Sep 21 11:00:29.312: INFO: Kubelet Metrics: []
W0921 11:29:58.875] Sep 21 11:00:29.315: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:58.875] Sep 21 11:00:29.315: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.875] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:29.315
W0921 11:29:58.875] Sep 21 11:00:31.331: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947278848
W0921 11:29:58.876] Sep 21 11:00:31.331: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947278848
W0921 11:29:58.876] Sep 21 11:00:31.331: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.876] Sep 21 11:00:31.331: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.876] Sep 21 11:00:31.331: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.876] Sep 21 11:00:31.341: INFO: Kubelet Metrics: []
W0921 11:29:58.876] Sep 21 11:00:31.343: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:58.877] Sep 21 11:00:31.344: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.877] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:31.344
W0921 11:29:58.877] Sep 21 11:00:33.358: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947278848
W0921 11:29:58.877] Sep 21 11:00:33.358: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947278848
W0921 11:29:58.877] Sep 21 11:00:33.358: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.877] Sep 21 11:00:33.358: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.878] Sep 21 11:00:33.358: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.878] Sep 21 11:00:33.389: INFO: Kubelet Metrics: []
W0921 11:29:58.878] Sep 21 11:00:33.392: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:58.878] Sep 21 11:00:33.392: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.878] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:33.392
W0921 11:29:58.878] Sep 21 11:00:35.405: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947278848
W0921 11:29:58.878] Sep 21 11:00:35.405: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947278848
W0921 11:29:58.879] Sep 21 11:00:35.405: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.879] Sep 21 11:00:35.405: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.879] Sep 21 11:00:35.405: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.879] Sep 21 11:00:35.416: INFO: Kubelet Metrics: []
W0921 11:29:58.879] Sep 21 11:00:35.419: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:58.880] Sep 21 11:00:35.419: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.880] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:35.419
W0921 11:29:58.880] Sep 21 11:00:37.434: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.880] Sep 21 11:00:37.434: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.880] Sep 21 11:00:37.434: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.881] Sep 21 11:00:37.434: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.881] Sep 21 11:00:37.434: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.881] Sep 21 11:00:37.443: INFO: Kubelet Metrics: []
W0921 11:29:58.881] Sep 21 11:00:37.446: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:58.881] Sep 21 11:00:37.446: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.882] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:37.446
W0921 11:29:58.882] Sep 21 11:00:39.458: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.882] Sep 21 11:00:39.458: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.882] Sep 21 11:00:39.458: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.882] Sep 21 11:00:39.458: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.882] Sep 21 11:00:39.458: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.883] Sep 21 11:00:39.476: INFO: Kubelet Metrics: []
W0921 11:29:58.883] Sep 21 11:00:39.482: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:58.883] Sep 21 11:00:39.482: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.883] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:39.482
W0921 11:29:58.883] Sep 21 11:00:41.495: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.884] Sep 21 11:00:41.495: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.884] Sep 21 11:00:41.495: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.884] Sep 21 11:00:41.495: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.884] Sep 21 11:00:41.495: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.884] Sep 21 11:00:41.506: INFO: Kubelet Metrics: []
W0921 11:29:58.884] Sep 21 11:00:41.508: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:58.885] Sep 21 11:00:41.508: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.885] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:41.508
W0921 11:29:58.885] Sep 21 11:00:43.524: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.885] Sep 21 11:00:43.524: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.885] Sep 21 11:00:43.524: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.886] Sep 21 11:00:43.524: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.886] Sep 21 11:00:43.524: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.886] Sep 21 11:00:43.540: INFO: Kubelet Metrics: []
W0921 11:29:58.886] Sep 21 11:00:43.548: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:58.886] Sep 21 11:00:43.548: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.886] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:43.548
W0921 11:29:58.887] Sep 21 11:00:45.560: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.887] Sep 21 11:00:45.560: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.887] Sep 21 11:00:45.560: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.887] Sep 21 11:00:45.560: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.887] Sep 21 11:00:45.560: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.887] Sep 21 11:00:45.570: INFO: Kubelet Metrics: []
W0921 11:29:58.887] Sep 21 11:00:45.573: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:58.888] Sep 21 11:00:45.573: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.888] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:45.573
W0921 11:29:58.888] Sep 21 11:00:47.585: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.888] Sep 21 11:00:47.585: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.888] Sep 21 11:00:47.585: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.889] Sep 21 11:00:47.585: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.889] Sep 21 11:00:47.585: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.889] Sep 21 11:00:47.596: INFO: Kubelet Metrics: []
W0921 11:29:58.889] Sep 21 11:00:47.598: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:58.889] Sep 21 11:00:47.598: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.889] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:47.598
W0921 11:29:58.889] Sep 21 11:00:49.614: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.890] Sep 21 11:00:49.614: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.890] Sep 21 11:00:49.614: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.890] Sep 21 11:00:49.614: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.890] Sep 21 11:00:49.614: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.890] Sep 21 11:00:49.625: INFO: Kubelet Metrics: []
W0921 11:29:58.891] Sep 21 11:00:49.627: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:58.891] Sep 21 11:00:49.627: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.891] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:49.627
W0921 11:29:58.891] Sep 21 11:00:51.642: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.891] Sep 21 11:00:51.642: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.892] Sep 21 11:00:51.642: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.892] Sep 21 11:00:51.642: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.892] Sep 21 11:00:51.642: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.892] Sep 21 11:00:51.661: INFO: Kubelet Metrics: []
W0921 11:29:58.892] Sep 21 11:00:51.663: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:58.892] Sep 21 11:00:51.663: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.893] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:51.663
W0921 11:29:58.893] Sep 21 11:00:53.676: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.893] Sep 21 11:00:53.676: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.893] Sep 21 11:00:53.676: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.893] Sep 21 11:00:53.676: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.894] Sep 21 11:00:53.676: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.894] Sep 21 11:00:53.693: INFO: Kubelet Metrics: []
W0921 11:29:58.894] Sep 21 11:00:53.701: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:58.894] Sep 21 11:00:53.701: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.894] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:53.701
W0921 11:29:58.895] Sep 21 11:00:55.717: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.895] Sep 21 11:00:55.717: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.895] Sep 21 11:00:55.717: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.895] Sep 21 11:00:55.717: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.895] Sep 21 11:00:55.717: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.895] Sep 21 11:00:55.728: INFO: Kubelet Metrics: []
W0921 11:29:58.896] Sep 21 11:00:55.731: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:58.896] Sep 21 11:00:55.731: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.896] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:55.731
W0921 11:29:58.896] Sep 21 11:00:57.743: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.896] Sep 21 11:00:57.743: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.897] Sep 21 11:00:57.743: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.897] Sep 21 11:00:57.743: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.897] Sep 21 11:00:57.743: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.897] Sep 21 11:00:57.755: INFO: Kubelet Metrics: []
W0921 11:29:58.897] Sep 21 11:00:57.757: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:58.898] Sep 21 11:00:57.757: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.898] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:57.757
W0921 11:29:58.898] Sep 21 11:00:59.770: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.898] Sep 21 11:00:59.770: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.898] Sep 21 11:00:59.770: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.898] Sep 21 11:00:59.770: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.899] Sep 21 11:00:59.770: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.899] Sep 21 11:00:59.781: INFO: Kubelet Metrics: []
W0921 11:29:58.899] Sep 21 11:00:59.783: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:58.899] Sep 21 11:00:59.784: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.900] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:59.784
W0921 11:29:58.900] Sep 21 11:01:01.800: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.900] Sep 21 11:01:01.800: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.900] Sep 21 11:01:01.800: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.900] Sep 21 11:01:01.800: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.901] Sep 21 11:01:01.800: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.901] Sep 21 11:01:01.811: INFO: Kubelet Metrics: []
W0921 11:29:58.901] Sep 21 11:01:01.813: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:58.901] Sep 21 11:01:01.813: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.902] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:01:01.813
W0921 11:29:58.902] Sep 21 11:01:03.827: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.902] Sep 21 11:01:03.827: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.902] Sep 21 11:01:03.827: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.902] Sep 21 11:01:03.827: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.903] Sep 21 11:01:03.827: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.903] Sep 21 11:01:03.851: INFO: Kubelet Metrics: []
W0921 11:29:58.903] Sep 21 11:01:03.855: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:58.903] Sep 21 11:01:03.855: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.904] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:01:03.855
W0921 11:29:58.904] Sep 21 11:01:05.869: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.904] Sep 21 11:01:05.869: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.904] Sep 21 11:01:05.869: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.904] Sep 21 11:01:05.869: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.905] Sep 21 11:01:05.869: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.905] Sep 21 11:01:05.881: INFO: Kubelet Metrics: []
W0921 11:29:58.905] Sep 21 11:01:05.884: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:58.905] Sep 21 11:01:05.884: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.906] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:01:05.884
W0921 11:29:58.906] Sep 21 11:01:07.900: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.906] Sep 21 11:01:07.900: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.906] Sep 21 11:01:07.900: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.906] Sep 21 11:01:07.900: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.907] Sep 21 11:01:07.900: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.907] Sep 21 11:01:07.911: INFO: Kubelet Metrics: []
W0921 11:29:58.907] Sep 21 11:01:07.913: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:58.907] Sep 21 11:01:07.913: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.908] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:01:07.913
W0921 11:29:58.908] Sep 21 11:01:09.926: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.908] Sep 21 11:01:09.926: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:58.908] Sep 21 11:01:09.926: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.908] Sep 21 11:01:09.926: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.909] Sep 21 11:01:09.926: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.909] Sep 21 11:01:09.938: INFO: Kubelet Metrics: []
W0921 11:29:58.909] Sep 21 11:01:09.940: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:58.909] Sep 21 11:01:09.940: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.910] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:01:09.94
... skipping 72 lines: the polling block above repeats unchanged every ~2s through 11:01:26 (AvailableBytes 14947471360, test-volume UsedBytes 67043328, over-sizelimit pod Failed, under-sizelimit pod Running) ...
W0921 11:29:58.923] STEP: checking for correctly formatted eviction events 09/21/22 11:01:27.264
W0921 11:29:58.923] [AfterEach] TOP-LEVEL
W0921 11:29:58.923]   test/e2e_node/eviction_test.go:592
W0921 11:29:58.923] STEP: deleting pods 09/21/22 11:01:27.267
W0921 11:29:58.924] STEP: deleting pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod 09/21/22 11:01:27.267
W0921 11:29:58.924] Sep 21 11:01:27.272: INFO: Waiting for pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod to disappear
... skipping 53 lines ...
W0921 11:29:58.934] 
W0921 11:29:58.935] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:58.935] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:58.935] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:58.935] 1 loaded units listed.
W0921 11:29:58.935] , kubelet-20220921T102832
W0921 11:29:58.935] W0921 11:02:03.445601    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:60670->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:58.936] STEP: Starting the kubelet 09/21/22 11:02:03.455
W0921 11:29:58.936] W0921 11:02:03.490070    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:58.936] Sep 21 11:02:08.493: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.936] Sep 21 11:02:09.495: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.937] Sep 21 11:02:10.499: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.937] Sep 21 11:02:11.502: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.937] Sep 21 11:02:12.505: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:58.938] Sep 21 11:02:13.508: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 34 lines ...
W0921 11:29:58.944] 
W0921 11:29:58.945]     LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:58.945]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:58.945]     SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:58.945]     1 loaded units listed.
W0921 11:29:58.945]     , kubelet-20220921T102832
W0921 11:29:58.945]     W0921 10:58:33.305530    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:60742->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:58.946]     STEP: Starting the kubelet 09/21/22 10:58:33.314
W0921 11:29:58.946]     W0921 10:58:33.347685    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:58.946]     Sep 21 10:58:38.354: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:58.947]     Sep 21 10:58:39.356: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:58.947]     Sep 21 10:58:40.359: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:58.947]     Sep 21 10:58:41.361: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:58.948]     Sep 21 10:58:42.364: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:58.948]     Sep 21 10:58:43.367: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
... skipping 24 lines ...
W0921 11:29:58.954]     STEP: Waiting for evictions to occur 09/21/22 10:59:18.446
W0921 11:29:58.954]     Sep 21 10:59:18.459: INFO: Kubelet Metrics: []
W0921 11:29:58.954]     Sep 21 10:59:18.469: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 15014227968
W0921 11:29:58.955]     Sep 21 10:59:18.469: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 15014227968
W0921 11:29:58.955]     Sep 21 10:59:18.471: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.955]     Sep 21 10:59:18.471: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.955]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:18.472
W0921 11:29:58.956]     Sep 21 10:59:20.485: INFO: Kubelet Metrics: []
W0921 11:29:58.956]     Sep 21 10:59:20.496: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 15014227968
W0921 11:29:58.956]     Sep 21 10:59:20.496: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 15014227968
W0921 11:29:58.956]     Sep 21 10:59:20.498: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.957]     Sep 21 10:59:20.498: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.957]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:20.498
W0921 11:29:58.957]     Sep 21 10:59:22.523: INFO: Kubelet Metrics: []
W0921 11:29:58.957]     Sep 21 10:59:22.535: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 15014227968
W0921 11:29:58.957]     Sep 21 10:59:22.535: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 15014227968
W0921 11:29:58.958]     Sep 21 10:59:22.537: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.958]     Sep 21 10:59:22.537: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.958]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:22.537
W0921 11:29:58.958]     Sep 21 10:59:24.548: INFO: Kubelet Metrics: []
W0921 11:29:58.959]     Sep 21 10:59:24.559: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811463680
W0921 11:29:58.959]     Sep 21 10:59:24.560: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811463680
W0921 11:29:58.959]     Sep 21 10:59:24.562: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.959]     Sep 21 10:59:24.562: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.960]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:24.562
W0921 11:29:58.960]     Sep 21 10:59:26.574: INFO: Kubelet Metrics: []
W0921 11:29:58.960]     Sep 21 10:59:26.592: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811463680
W0921 11:29:58.960]     Sep 21 10:59:26.592: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811463680
W0921 11:29:58.960]     Sep 21 10:59:26.595: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.961]     Sep 21 10:59:26.595: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.961]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:26.595
W0921 11:29:58.961]     Sep 21 10:59:28.606: INFO: Kubelet Metrics: []
W0921 11:29:58.961]     Sep 21 10:59:28.619: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811463680
W0921 11:29:58.962]     Sep 21 10:59:28.619: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811463680
W0921 11:29:58.962]     Sep 21 10:59:28.619: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.962]     Sep 21 10:59:28.619: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.962]     Sep 21 10:59:28.619: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.962]     Sep 21 10:59:28.621: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.963]     Sep 21 10:59:28.621: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.963]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:28.621
W0921 11:29:58.963]     Sep 21 10:59:30.635: INFO: Kubelet Metrics: []
W0921 11:29:58.963]     Sep 21 10:59:30.661: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811463680
W0921 11:29:58.963]     Sep 21 10:59:30.661: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811463680
W0921 11:29:58.964]     Sep 21 10:59:30.661: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.964]     Sep 21 10:59:30.661: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.964]     Sep 21 10:59:30.661: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.964]     Sep 21 10:59:30.664: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.964]     Sep 21 10:59:30.664: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.965]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:30.664
W0921 11:29:58.965]     Sep 21 10:59:32.677: INFO: Kubelet Metrics: []
W0921 11:29:58.965]     Sep 21 10:59:32.690: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811463680
W0921 11:29:58.965]     Sep 21 10:59:32.690: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811463680
W0921 11:29:58.965]     Sep 21 10:59:32.690: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.966]     Sep 21 10:59:32.690: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.966]     Sep 21 10:59:32.690: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.966]     Sep 21 10:59:32.692: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.966]     Sep 21 10:59:32.692: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.967]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:32.692
W0921 11:29:58.967]     Sep 21 10:59:34.704: INFO: Kubelet Metrics: []
W0921 11:29:58.967]     Sep 21 10:59:34.717: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811811840
W0921 11:29:58.967]     Sep 21 10:59:34.717: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811811840
W0921 11:29:58.967]     Sep 21 10:59:34.717: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.968]     Sep 21 10:59:34.717: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.968]     Sep 21 10:59:34.717: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.968]     Sep 21 10:59:34.717: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
W0921 11:29:58.968]     Sep 21 10:59:34.717: INFO: --- summary Volume: test-volume UsedBytes: 134152192
W0921 11:29:58.968]     Sep 21 10:59:34.720: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.969]     Sep 21 10:59:34.720: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.969]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:34.72
W0921 11:29:58.969]     Sep 21 10:59:36.738: INFO: Kubelet Metrics: []
W0921 11:29:58.969]     Sep 21 10:59:36.756: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811811840
W0921 11:29:58.969]     Sep 21 10:59:36.756: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811811840
W0921 11:29:58.970]     Sep 21 10:59:36.756: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
W0921 11:29:58.970]     Sep 21 10:59:36.756: INFO: --- summary Volume: test-volume UsedBytes: 134152192
W0921 11:29:58.970]     Sep 21 10:59:36.756: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.970]     Sep 21 10:59:36.756: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.970]     Sep 21 10:59:36.756: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.971]     Sep 21 10:59:36.759: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.971]     Sep 21 10:59:36.760: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.971]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:36.76
    ... skipping 55 lines: the same polling block repeats with unchanged usage (over-sizelimit test-volume 134152192 bytes, under-sizelimit test-volume 67043328 bytes, both pods Running) through 10:59:46 ...
W0921 11:29:58.982]     Sep 21 10:59:48.925: INFO: Kubelet Metrics: []
W0921 11:29:58.982]     Sep 21 10:59:48.938: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.982]     Sep 21 10:59:48.939: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.982]     Sep 21 10:59:48.939: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
W0921 11:29:58.983]     Sep 21 10:59:48.939: INFO: --- summary Volume: test-volume UsedBytes: 134152192
W0921 11:29:58.983]     Sep 21 10:59:48.939: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.983]     Sep 21 10:59:48.939: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.983]     Sep 21 10:59:48.939: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.983]     Sep 21 10:59:48.941: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.984]     Sep 21 10:59:48.941: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.984]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:48.941
W0921 11:29:58.984]     Sep 21 10:59:50.959: INFO: Kubelet Metrics: []
W0921 11:29:58.984]     Sep 21 10:59:50.976: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.984]     Sep 21 10:59:50.976: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.985]     Sep 21 10:59:50.976: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
W0921 11:29:58.985]     Sep 21 10:59:50.976: INFO: --- summary Volume: test-volume UsedBytes: 134152192
W0921 11:29:58.985]     Sep 21 10:59:50.976: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.985]     Sep 21 10:59:50.976: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.985]     Sep 21 10:59:50.976: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.986]     Sep 21 10:59:50.979: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.986]     Sep 21 10:59:50.979: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.986]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:50.979
W0921 11:29:58.986]     Sep 21 10:59:52.991: INFO: Kubelet Metrics: []
W0921 11:29:58.986]     Sep 21 10:59:53.003: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.986]     Sep 21 10:59:53.003: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.987]     Sep 21 10:59:53.003: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.987]     Sep 21 10:59:53.003: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.987]     Sep 21 10:59:53.003: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.987]     Sep 21 10:59:53.003: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
W0921 11:29:58.987]     Sep 21 10:59:53.003: INFO: --- summary Volume: test-volume UsedBytes: 134152192
W0921 11:29:58.988]     Sep 21 10:59:53.005: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.988]     Sep 21 10:59:53.005: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.988]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:53.005
W0921 11:29:58.988]     Sep 21 10:59:55.019: INFO: Kubelet Metrics: []
W0921 11:29:58.988]     Sep 21 10:59:55.031: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.988]     Sep 21 10:59:55.031: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.989]     Sep 21 10:59:55.031: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.989]     Sep 21 10:59:55.031: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.989]     Sep 21 10:59:55.031: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.989]     Sep 21 10:59:55.031: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
W0921 11:29:58.989]     Sep 21 10:59:55.031: INFO: --- summary Volume: test-volume UsedBytes: 134152192
W0921 11:29:58.989]     Sep 21 10:59:55.033: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.989]     Sep 21 10:59:55.033: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.990]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:55.033
W0921 11:29:58.990]     Sep 21 10:59:57.056: INFO: Kubelet Metrics: []
W0921 11:29:58.990]     Sep 21 10:59:57.073: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.990]     Sep 21 10:59:57.074: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.990]     Sep 21 10:59:57.074: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.990]     Sep 21 10:59:57.074: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.991]     Sep 21 10:59:57.074: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.991]     Sep 21 10:59:57.074: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
W0921 11:29:58.991]     Sep 21 10:59:57.074: INFO: --- summary Volume: test-volume UsedBytes: 134152192
W0921 11:29:58.991]     Sep 21 10:59:57.076: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.991]     Sep 21 10:59:57.076: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.991]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:57.076
W0921 11:29:58.992]     Sep 21 10:59:59.087: INFO: Kubelet Metrics: []
W0921 11:29:58.992]     Sep 21 10:59:59.098: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.992]     Sep 21 10:59:59.098: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.992]     Sep 21 10:59:59.098: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.992]     Sep 21 10:59:59.098: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.993]     Sep 21 10:59:59.098: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.993]     Sep 21 10:59:59.098: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
W0921 11:29:58.993]     Sep 21 10:59:59.098: INFO: --- summary Volume: test-volume UsedBytes: 134152192
W0921 11:29:58.993]     Sep 21 10:59:59.101: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.993]     Sep 21 10:59:59.101: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.993]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:59.101
W0921 11:29:58.994]     Sep 21 11:00:01.114: INFO: Kubelet Metrics: []
W0921 11:29:58.994]     Sep 21 11:00:01.126: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.994]     Sep 21 11:00:01.126: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.994]     Sep 21 11:00:01.126: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
W0921 11:29:58.994]     Sep 21 11:00:01.126: INFO: --- summary Volume: test-volume UsedBytes: 134152192
W0921 11:29:58.994]     Sep 21 11:00:01.126: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.995]     Sep 21 11:00:01.126: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.995]     Sep 21 11:00:01.126: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.995]     Sep 21 11:00:01.129: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.995]     Sep 21 11:00:01.129: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.995]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:01.129
W0921 11:29:58.996]     Sep 21 11:00:03.143: INFO: Kubelet Metrics: []
W0921 11:29:58.996]     Sep 21 11:00:03.155: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.996]     Sep 21 11:00:03.155: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.996]     Sep 21 11:00:03.155: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.996]     Sep 21 11:00:03.155: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.997]     Sep 21 11:00:03.155: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.997]     Sep 21 11:00:03.155: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
W0921 11:29:58.997]     Sep 21 11:00:03.155: INFO: --- summary Volume: test-volume UsedBytes: 134152192
W0921 11:29:58.997]     Sep 21 11:00:03.157: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.997]     Sep 21 11:00:03.157: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.997]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:03.157
W0921 11:29:58.997]     Sep 21 11:00:05.175: INFO: Kubelet Metrics: []
W0921 11:29:58.998]     Sep 21 11:00:05.194: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.998]     Sep 21 11:00:05.194: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
W0921 11:29:58.998]     Sep 21 11:00:05.194: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:58.998]     Sep 21 11:00:05.194: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:58.998]     Sep 21 11:00:05.194: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:58.998]     Sep 21 11:00:05.194: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
W0921 11:29:58.999]     Sep 21 11:00:05.194: INFO: --- summary Volume: test-volume UsedBytes: 134152192
W0921 11:29:58.999]     Sep 21 11:00:05.197: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.999]     Sep 21 11:00:05.197: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:58.999]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:05.197
W0921 11:29:58.999]     Sep 21 11:00:07.218: INFO: Kubelet Metrics: []
W0921 11:29:58.999]     Sep 21 11:00:07.238: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947278848
W0921 11:29:59.000]     Sep 21 11:00:07.238: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947278848
W0921 11:29:59.000]     Sep 21 11:00:07.238: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.000]     Sep 21 11:00:07.238: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.000]     Sep 21 11:00:07.238: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.000]     Sep 21 11:00:07.241: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.001]     Sep 21 11:00:07.241: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.001]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:07.241
W0921 11:29:59.001]     STEP: making sure pressure from test has surfaced before continuing 09/21/22 11:00:07.241
W0921 11:29:59.001]     STEP: Waiting for NodeCondition: NoPressure to no longer exist on the node 09/21/22 11:00:27.241
W0921 11:29:59.001]     Sep 21 11:00:27.253: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947278848
W0921 11:29:59.002]     Sep 21 11:00:27.253: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947278848
W0921 11:29:59.002]     Sep 21 11:00:27.253: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.002]     Sep 21 11:00:27.253: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
... skipping 3 lines ...
W0921 11:29:59.003]     Sep 21 11:00:27.276: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947278848
W0921 11:29:59.003]     Sep 21 11:00:27.276: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947278848
W0921 11:29:59.003]     Sep 21 11:00:27.276: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.003]     Sep 21 11:00:27.276: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.003]     Sep 21 11:00:27.276: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.003]     Sep 21 11:00:27.286: INFO: Kubelet Metrics: []
W0921 11:29:59.004]     Sep 21 11:00:27.289: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.004]     Sep 21 11:00:27.289: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.004]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:27.289
W0921 11:29:59.004]     Sep 21 11:00:29.302: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947278848
W0921 11:29:59.004]     Sep 21 11:00:29.302: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947278848
W0921 11:29:59.004]     Sep 21 11:00:29.302: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.005]     Sep 21 11:00:29.302: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.005]     Sep 21 11:00:29.302: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.005]     Sep 21 11:00:29.312: INFO: Kubelet Metrics: []
W0921 11:29:59.005]     Sep 21 11:00:29.315: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.005]     Sep 21 11:00:29.315: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.006]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:29.315
W0921 11:29:59.006]     Sep 21 11:00:31.331: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947278848
W0921 11:29:59.006]     Sep 21 11:00:31.331: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947278848
W0921 11:29:59.006]     Sep 21 11:00:31.331: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.006]     Sep 21 11:00:31.331: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.006]     Sep 21 11:00:31.331: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.006]     Sep 21 11:00:31.341: INFO: Kubelet Metrics: []
W0921 11:29:59.007]     Sep 21 11:00:31.343: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.007]     Sep 21 11:00:31.344: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.007]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:31.344
W0921 11:29:59.007]     Sep 21 11:00:33.358: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947278848
W0921 11:29:59.007]     Sep 21 11:00:33.358: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947278848
W0921 11:29:59.008]     Sep 21 11:00:33.358: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.008]     Sep 21 11:00:33.358: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.008]     Sep 21 11:00:33.358: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.008]     Sep 21 11:00:33.389: INFO: Kubelet Metrics: []
W0921 11:29:59.008]     Sep 21 11:00:33.392: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.008]     Sep 21 11:00:33.392: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.009]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:33.392
W0921 11:29:59.009]     Sep 21 11:00:35.405: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947278848
W0921 11:29:59.009]     Sep 21 11:00:35.405: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947278848
W0921 11:29:59.009]     Sep 21 11:00:35.405: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.009]     Sep 21 11:00:35.405: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.009]     Sep 21 11:00:35.405: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.010]     Sep 21 11:00:35.416: INFO: Kubelet Metrics: []
W0921 11:29:59.010]     Sep 21 11:00:35.419: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.010]     Sep 21 11:00:35.419: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.010]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:35.419
W0921 11:29:59.010]     Sep 21 11:00:37.434: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.010]     Sep 21 11:00:37.434: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.011]     Sep 21 11:00:37.434: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.011]     Sep 21 11:00:37.434: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.011]     Sep 21 11:00:37.434: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.011]     Sep 21 11:00:37.443: INFO: Kubelet Metrics: []
W0921 11:29:59.011]     Sep 21 11:00:37.446: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.012]     Sep 21 11:00:37.446: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.012]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:37.446
W0921 11:29:59.012]     Sep 21 11:00:39.458: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.012]     Sep 21 11:00:39.458: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.012]     Sep 21 11:00:39.458: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.012]     Sep 21 11:00:39.458: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.013]     Sep 21 11:00:39.458: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.013]     Sep 21 11:00:39.476: INFO: Kubelet Metrics: []
W0921 11:29:59.013]     Sep 21 11:00:39.482: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.013]     Sep 21 11:00:39.482: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.013]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:39.482
W0921 11:29:59.013]     Sep 21 11:00:41.495: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.014]     Sep 21 11:00:41.495: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.014]     Sep 21 11:00:41.495: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.014]     Sep 21 11:00:41.495: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.014]     Sep 21 11:00:41.495: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.015]     Sep 21 11:00:41.506: INFO: Kubelet Metrics: []
W0921 11:29:59.015]     Sep 21 11:00:41.508: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.015]     Sep 21 11:00:41.508: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.015]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:41.508
W0921 11:29:59.015]     Sep 21 11:00:43.524: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.016]     Sep 21 11:00:43.524: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.016]     Sep 21 11:00:43.524: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.016]     Sep 21 11:00:43.524: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.016]     Sep 21 11:00:43.524: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.016]     Sep 21 11:00:43.540: INFO: Kubelet Metrics: []
W0921 11:29:59.017]     Sep 21 11:00:43.548: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.017]     Sep 21 11:00:43.548: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.017]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:43.548
W0921 11:29:59.017]     Sep 21 11:00:45.560: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.017]     Sep 21 11:00:45.560: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.018]     Sep 21 11:00:45.560: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.018]     Sep 21 11:00:45.560: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.018]     Sep 21 11:00:45.560: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.018]     Sep 21 11:00:45.570: INFO: Kubelet Metrics: []
W0921 11:29:59.018]     Sep 21 11:00:45.573: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.019]     Sep 21 11:00:45.573: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.019]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:45.573
W0921 11:29:59.019]     Sep 21 11:00:47.585: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.019]     Sep 21 11:00:47.585: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.020]     Sep 21 11:00:47.585: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.020]     Sep 21 11:00:47.585: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.020]     Sep 21 11:00:47.585: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.020]     Sep 21 11:00:47.596: INFO: Kubelet Metrics: []
W0921 11:29:59.020]     Sep 21 11:00:47.598: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.021]     Sep 21 11:00:47.598: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.021]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:47.598
W0921 11:29:59.021]     Sep 21 11:00:49.614: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.021]     Sep 21 11:00:49.614: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.021]     Sep 21 11:00:49.614: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.022]     Sep 21 11:00:49.614: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.022]     Sep 21 11:00:49.614: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.022]     Sep 21 11:00:49.625: INFO: Kubelet Metrics: []
W0921 11:29:59.022]     Sep 21 11:00:49.627: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.022]     Sep 21 11:00:49.627: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.022]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:49.627
W0921 11:29:59.023]     Sep 21 11:00:51.642: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.023]     Sep 21 11:00:51.642: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.023]     Sep 21 11:00:51.642: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.023]     Sep 21 11:00:51.642: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.023]     Sep 21 11:00:51.642: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.024]     Sep 21 11:00:51.661: INFO: Kubelet Metrics: []
W0921 11:29:59.024]     Sep 21 11:00:51.663: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.024]     Sep 21 11:00:51.663: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.024]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:51.663
W0921 11:29:59.024]     Sep 21 11:00:53.676: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.025]     Sep 21 11:00:53.676: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.025]     Sep 21 11:00:53.676: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.025]     Sep 21 11:00:53.676: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.025]     Sep 21 11:00:53.676: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.025]     Sep 21 11:00:53.693: INFO: Kubelet Metrics: []
W0921 11:29:59.025]     Sep 21 11:00:53.701: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.026]     Sep 21 11:00:53.701: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.026]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:53.701
W0921 11:29:59.026]     Sep 21 11:00:55.717: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.026]     Sep 21 11:00:55.717: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.026]     Sep 21 11:00:55.717: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.027]     Sep 21 11:00:55.717: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.027]     Sep 21 11:00:55.717: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.027]     Sep 21 11:00:55.728: INFO: Kubelet Metrics: []
W0921 11:29:59.027]     Sep 21 11:00:55.731: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.027]     Sep 21 11:00:55.731: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.028]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:55.731
W0921 11:29:59.028]     Sep 21 11:00:57.743: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.028]     Sep 21 11:00:57.743: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.028]     Sep 21 11:00:57.743: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.028]     Sep 21 11:00:57.743: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.029]     Sep 21 11:00:57.743: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.029]     Sep 21 11:00:57.755: INFO: Kubelet Metrics: []
W0921 11:29:59.029]     Sep 21 11:00:57.757: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.029]     Sep 21 11:00:57.757: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.029]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:57.757
W0921 11:29:59.030]     Sep 21 11:00:59.770: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.030]     Sep 21 11:00:59.770: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.030]     Sep 21 11:00:59.770: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.030]     Sep 21 11:00:59.770: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.030]     Sep 21 11:00:59.770: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.030]     Sep 21 11:00:59.781: INFO: Kubelet Metrics: []
W0921 11:29:59.031]     Sep 21 11:00:59.783: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.031]     Sep 21 11:00:59.784: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.031]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:59.784
W0921 11:29:59.031]     Sep 21 11:01:01.800: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.031]     Sep 21 11:01:01.800: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.032]     Sep 21 11:01:01.800: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.032]     Sep 21 11:01:01.800: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.032]     Sep 21 11:01:01.800: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.032]     Sep 21 11:01:01.811: INFO: Kubelet Metrics: []
W0921 11:29:59.032]     Sep 21 11:01:01.813: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.033]     Sep 21 11:01:01.813: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.033]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:01:01.813
W0921 11:29:59.033]     Sep 21 11:01:03.827: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.033]     Sep 21 11:01:03.827: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.033]     Sep 21 11:01:03.827: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.034]     Sep 21 11:01:03.827: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.034]     Sep 21 11:01:03.827: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.034]     Sep 21 11:01:03.851: INFO: Kubelet Metrics: []
W0921 11:29:59.034]     Sep 21 11:01:03.855: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.034]     Sep 21 11:01:03.855: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.035]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:01:03.855
W0921 11:29:59.035]     Sep 21 11:01:05.869: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.035]     Sep 21 11:01:05.869: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.035]     Sep 21 11:01:05.869: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.035]     Sep 21 11:01:05.869: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.036]     Sep 21 11:01:05.869: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.036]     Sep 21 11:01:05.881: INFO: Kubelet Metrics: []
W0921 11:29:59.036]     Sep 21 11:01:05.884: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.036]     Sep 21 11:01:05.884: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.036]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:01:05.884
W0921 11:29:59.037]     Sep 21 11:01:07.900: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.037]     Sep 21 11:01:07.900: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.037]     Sep 21 11:01:07.900: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.037]     Sep 21 11:01:07.900: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.037]     Sep 21 11:01:07.900: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.038]     Sep 21 11:01:07.911: INFO: Kubelet Metrics: []
W0921 11:29:59.038]     Sep 21 11:01:07.913: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.038]     Sep 21 11:01:07.913: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.038]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:01:07.913
W0921 11:29:59.038]     Sep 21 11:01:09.926: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.039]     Sep 21 11:01:09.926: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.039]     Sep 21 11:01:09.926: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.039]     Sep 21 11:01:09.926: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.039]     Sep 21 11:01:09.926: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.040]     Sep 21 11:01:09.938: INFO: Kubelet Metrics: []
W0921 11:29:59.040]     Sep 21 11:01:09.940: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.040]     Sep 21 11:01:09.940: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.040]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:01:09.94
W0921 11:29:59.040]     Sep 21 11:01:11.962: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.041]     Sep 21 11:01:11.962: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.041]     Sep 21 11:01:11.962: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.041]     Sep 21 11:01:11.962: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.041]     Sep 21 11:01:11.962: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.041]     Sep 21 11:01:11.975: INFO: Kubelet Metrics: []
W0921 11:29:59.042]     Sep 21 11:01:11.978: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.042]     Sep 21 11:01:11.978: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.042]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:01:11.978
W0921 11:29:59.042]     Sep 21 11:01:13.993: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.042]     Sep 21 11:01:13.993: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.043]     Sep 21 11:01:13.993: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.043]     Sep 21 11:01:13.993: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.043]     Sep 21 11:01:13.993: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.043]     Sep 21 11:01:14.004: INFO: Kubelet Metrics: []
W0921 11:29:59.043]     Sep 21 11:01:14.007: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.044]     Sep 21 11:01:14.007: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.044]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:01:14.007
W0921 11:29:59.044]     Sep 21 11:01:16.019: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.044]     Sep 21 11:01:16.019: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.044]     Sep 21 11:01:16.019: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.045]     Sep 21 11:01:16.019: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.045]     Sep 21 11:01:16.019: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.045]     Sep 21 11:01:16.032: INFO: Kubelet Metrics: []
W0921 11:29:59.045]     Sep 21 11:01:16.037: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.045]     Sep 21 11:01:16.037: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.046]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:01:16.037
W0921 11:29:59.046]     Sep 21 11:01:18.052: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.046]     Sep 21 11:01:18.052: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.046]     Sep 21 11:01:18.052: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.046]     Sep 21 11:01:18.052: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.047]     Sep 21 11:01:18.052: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.047]     Sep 21 11:01:18.064: INFO: Kubelet Metrics: []
W0921 11:29:59.047]     Sep 21 11:01:18.066: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.047]     Sep 21 11:01:18.066: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.047]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:01:18.066
W0921 11:29:59.048]     Sep 21 11:01:20.082: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.048]     Sep 21 11:01:20.082: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.048]     Sep 21 11:01:20.082: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.048]     Sep 21 11:01:20.082: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.048]     Sep 21 11:01:20.082: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.049]     Sep 21 11:01:20.093: INFO: Kubelet Metrics: []
W0921 11:29:59.049]     Sep 21 11:01:20.096: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.049]     Sep 21 11:01:20.096: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.049]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:01:20.096
W0921 11:29:59.049]     Sep 21 11:01:22.108: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.050]     Sep 21 11:01:22.108: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.050]     Sep 21 11:01:22.108: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.050]     Sep 21 11:01:22.108: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.050]     Sep 21 11:01:22.108: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.050]     Sep 21 11:01:22.119: INFO: Kubelet Metrics: []
W0921 11:29:59.051]     Sep 21 11:01:22.121: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.051]     Sep 21 11:01:22.121: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.051]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:01:22.122
W0921 11:29:59.051]     Sep 21 11:01:24.133: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.052]     Sep 21 11:01:24.133: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.052]     Sep 21 11:01:24.133: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.052]     Sep 21 11:01:24.133: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.052]     Sep 21 11:01:24.133: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.052]     Sep 21 11:01:24.145: INFO: Kubelet Metrics: []
W0921 11:29:59.053]     Sep 21 11:01:24.147: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.053]     Sep 21 11:01:24.147: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.053]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:01:24.147
W0921 11:29:59.053]     Sep 21 11:01:26.160: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.053]     Sep 21 11:01:26.160: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
W0921 11:29:59.054]     Sep 21 11:01:26.160: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
W0921 11:29:59.054]     Sep 21 11:01:26.160: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
W0921 11:29:59.054]     Sep 21 11:01:26.160: INFO: --- summary Volume: test-volume UsedBytes: 67043328
W0921 11:29:59.054]     Sep 21 11:01:26.181: INFO: Kubelet Metrics: []
W0921 11:29:59.054]     Sep 21 11:01:26.184: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
W0921 11:29:59.055]     Sep 21 11:01:26.184: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
W0921 11:29:59.055]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:01:26.184
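The repeated STEP/INFO lines above are the eviction test's polling loop: each tick it re-reads summary stats and checks that the pod over its emptyDir sizeLimit ends up `Failed` while the under-limit pod stays `Running`. A minimal sketch of that decision (names and the 100Mi limit are illustrative, not the actual test code):

```go
package main

import "fmt"

// exceedsSizeLimit mirrors the check behind LocalStorageCapacityIsolation
// eviction: usage above the emptyDir sizeLimit triggers eviction.
// This is a hypothetical reduction, not the kubelet's real API.
func exceedsSizeLimit(usedBytes, sizeLimitBytes int64) bool {
	return usedBytes > sizeLimitBytes
}

// expectedPhase returns the pod phase the test polls for.
func expectedPhase(usedBytes, sizeLimitBytes int64) string {
	if exceedsSizeLimit(usedBytes, sizeLimitBytes) {
		return "Failed" // evicted by the kubelet's eviction manager
	}
	return "Running"
}

func main() {
	const sizeLimit = 100 * 1024 * 1024 // assumed 100Mi limit for illustration
	fmt.Println(expectedPhase(209715200, sizeLimit)) // over-limit pod
	fmt.Println(expectedPhase(67043328, sizeLimit))  // 67043328 bytes, as logged above
}
```

This matches the phases logged each iteration: the over-sizelimit pod is already `Failed`, the under-sizelimit pod remains `Running`.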
W0921 11:29:59.055]     STEP: checking for correctly formatted eviction events 09/21/22 11:01:27.264
W0921 11:29:59.055]     [AfterEach] TOP-LEVEL
W0921 11:29:59.055]       test/e2e_node/eviction_test.go:592
W0921 11:29:59.055]     STEP: deleting pods 09/21/22 11:01:27.267
W0921 11:29:59.056]     STEP: deleting pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod 09/21/22 11:01:27.267
W0921 11:29:59.056]     Sep 21 11:01:27.272: INFO: Waiting for pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod to disappear
... skipping 53 lines ...
W0921 11:29:59.067] 
W0921 11:29:59.067]     LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.068]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.068]     SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.068]     1 loaded units listed.
W0921 11:29:59.068]     , kubelet-20220921T102832
W0921 11:29:59.068]     W0921 11:02:03.445601    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:60670->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.068]     STEP: Starting the kubelet 09/21/22 11:02:03.455
W0921 11:29:59.069]     W0921 11:02:03.490070    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.069]     Sep 21 11:02:08.493: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.069]     Sep 21 11:02:09.495: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.070]     Sep 21 11:02:10.499: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.070]     Sep 21 11:02:11.502: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.070]     Sep 21 11:02:12.505: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.071]     Sep 21 11:02:13.508: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 71 lines ...
W0921 11:29:59.084] 
W0921 11:29:59.084] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.084] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.084] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.084] 1 loaded units listed.
W0921 11:29:59.085] , kubelet-20220921T102832
W0921 11:29:59.085] W0921 11:02:14.644492    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.085] STEP: Starting the kubelet 09/21/22 11:02:14.654
W0921 11:29:59.085] W0921 11:02:14.690773    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.086] Sep 21 11:02:19.705: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.086] Sep 21 11:02:20.709: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.086] Sep 21 11:02:21.711: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.087] Sep 21 11:02:22.714: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.087] Sep 21 11:02:23.717: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.088] Sep 21 11:02:24.720: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 12 lines ...
W0921 11:29:59.090] 
W0921 11:29:59.090] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.090] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.090] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.091] 1 loaded units listed.
W0921 11:29:59.091] , kubelet-20220921T102832
W0921 11:29:59.091] W0921 11:02:33.323937    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:33908->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.091] STEP: Starting the kubelet 09/21/22 11:02:33.334
W0921 11:29:59.091] W0921 11:02:33.392581    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.092] Sep 21 11:02:38.395: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.092] Sep 21 11:02:39.398: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.092] Sep 21 11:02:40.400: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.093] Sep 21 11:02:41.403: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.093] Sep 21 11:02:42.405: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.093] Sep 21 11:02:43.408: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 29 lines ...
W0921 11:29:59.098] 
W0921 11:29:59.099]     LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.099]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.099]     SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.099]     1 loaded units listed.
W0921 11:29:59.099]     , kubelet-20220921T102832
W0921 11:29:59.099]     W0921 11:02:14.644492    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.099]     STEP: Starting the kubelet 09/21/22 11:02:14.654
W0921 11:29:59.100]     W0921 11:02:14.690773    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.100]     Sep 21 11:02:19.705: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.100]     Sep 21 11:02:20.709: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.101]     Sep 21 11:02:21.711: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.101]     Sep 21 11:02:22.714: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.101]     Sep 21 11:02:23.717: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.101]     Sep 21 11:02:24.720: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 12 lines ...
W0921 11:29:59.104] 
W0921 11:29:59.104]     LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.104]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.104]     SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.104]     1 loaded units listed.
W0921 11:29:59.104]     , kubelet-20220921T102832
W0921 11:29:59.104]     W0921 11:02:33.323937    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:33908->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.105]     STEP: Starting the kubelet 09/21/22 11:02:33.334
W0921 11:29:59.105]     W0921 11:02:33.392581    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.105]     Sep 21 11:02:38.395: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.105]     Sep 21 11:02:39.398: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.105]     Sep 21 11:02:40.400: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.106]     Sep 21 11:02:41.403: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.106]     Sep 21 11:02:42.405: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.106]     Sep 21 11:02:43.408: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 19 lines ...
W0921 11:29:59.109] STEP: Building a namespace api object, basename topology-manager-test 09/21/22 11:02:44.421
W0921 11:29:59.110] Sep 21 11:02:44.429: INFO: Skipping waiting for service account
W0921 11:29:59.110] [It] run Topology Manager policy test suite
W0921 11:29:59.110]   test/e2e_node/topology_manager_test.go:888
W0921 11:29:59.110] STEP: by configuring Topology Manager policy to single-numa-node 09/21/22 11:02:44.446
W0921 11:29:59.110] Sep 21 11:02:44.446: INFO: Configuring topology Manager policy to single-numa-node
W0921 11:29:59.111] Sep 21 11:02:44.446: INFO: failed to find any VF device from [{0000:00:00.0 -1 false false} {0000:00:01.0 -1 false false} {0000:00:01.3 -1 false false} {0000:00:03.0 -1 false false} {0000:00:04.0 -1 false false} {0000:00:05.0 -1 false false}]
W0921 11:29:59.112] Sep 21 11:02:44.447: INFO: New kubelet config is {{ } %!s(bool=true) /tmp/node-e2e-20220921T102832/static-pods3606252740 {1m0s} {10s} {20s}  map[] 0.0.0.0 %!s(int32=10250) %!s(int32=10255) /usr/libexec/kubernetes/kubelet-plugins/volume/exec/  /var/lib/kubelet/pki/kubelet.crt /var/lib/kubelet/pki/kubelet.key []  %!s(bool=false) %!s(bool=false) {{} {%!s(bool=false) {2m0s}} {%!s(bool=true)}} {AlwaysAllow {{5m0s} {30s}}} %!s(int32=5) %!s(int32=10) %!s(int32=5) %!s(int32=10) %!s(bool=true) %!s(bool=false) %!s(int32=10248) 127.0.0.1 %!s(int32=-999)  [] {4h0m0s} {10s} {5m0s} %!s(int32=40) {2m0s} %!s(int32=85) %!s(int32=80) {10s} /system.slice/kubelet.service  / %!s(bool=true) systemd static map[] {1s} None single-numa-node container map[] {2m0s} promiscuous-bridge %!s(int32=110) 10.100.0.0/24 %!s(int64=-1) /etc/resolv.conf %!s(bool=false) %!s(bool=true) {100ms} %!s(int64=1000000) %!s(int32=50) application/vnd.kubernetes.protobuf %!s(int32=5) %!s(int32=10) %!s(bool=false) map[memory.available:250Mi nodefs.available:10% nodefs.inodesFree:5%] map[] map[] {30s} %!s(int32=0) map[nodefs.available:5% nodefs.inodesFree:5%] %!s(int32=0) %!s(bool=true) %!s(bool=false) %!s(bool=true) %!s(int32=14) %!s(int32=15) map[CPUManager:%!s(bool=true) LocalStorageCapacityIsolation:%!s(bool=true) TopologyManager:%!s(bool=true)] %!s(bool=true) {} 10Mi %!s(int32=5) Watch [] %!s(bool=false) map[] map[cpu:200m]   [pods]   {text 5s %!s(v1.VerbosityLevel=4) [] {{%!s(bool=false) {{{%!s(int64=0) %!s(resource.Scale=0)} {%!s(*inf.Dec=<nil>)} 0 DecimalSI}}}}} %!s(bool=true) {0s} {0s} [] [] %!s(bool=true) %!s(bool=true) %!s(bool=false) %!s(*float64=0xc0012fe300) [] %!s(bool=true) %!s(*v1.TracingConfiguration=<nil>) %!s(bool=true)}
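The config dump above shows the kubelet being reconfigured with the `single-numa-node` Topology Manager policy. In rough terms, that policy merges the NUMA affinity hints from each resource provider and admits a pod only if the merged affinity lands on exactly one NUMA node. A simplified bitmask sketch of that rule (this is an illustration, not the kubelet's actual hint-provider API):

```go
package main

import (
	"fmt"
	"math/bits"
)

// admit merges per-provider NUMA-node bitmasks with AND and accepts
// the pod only when all providers agree on a single NUMA node,
// approximating the single-numa-node policy. Hypothetical helper.
func admit(hints []uint64) bool {
	merged := ^uint64(0)
	for _, h := range hints {
		merged &= h
	}
	return bits.OnesCount64(merged) == 1
}

func main() {
	fmt.Println(admit([]uint64{0b01, 0b01})) // CPU and device both prefer node 0
	fmt.Println(admit([]uint64{0b01, 0b10})) // resources split across NUMA nodes
}
```

Note the preceding "failed to find any VF device" line: with no SR-IOV VFs on this VM, the suite runs the policy tests without device-plugin hints.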
W0921 11:29:59.112] STEP: Stopping the kubelet 09/21/22 11:02:44.447
W0921 11:29:59.112] Sep 21 11:02:44.494: INFO: Get running kubelet with systemctl:   UNIT                            LOAD   ACTIVE SUB     DESCRIPTION
W0921 11:29:59.113]   kubelet-20220921T102832.service loaded active running /tmp/node-e2e-20220921T102832/kubelet --kubeconfig /tmp/node-e2e-20220921T102832/kubeconfig --root-dir /var/lib/kubelet --v 4 --feature-gates LocalStorageCapacityIsolation=true --hostname-override n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20220921T102832/kubelet-config --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service
W0921 11:29:59.113] 
W0921 11:29:59.113] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.113] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.114] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.114] 1 loaded units listed.
W0921 11:29:59.114] , kubelet-20220921T102832
W0921 11:29:59.114] W0921 11:02:44.589943    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:45092->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.114] STEP: Starting the kubelet 09/21/22 11:02:44.599
W0921 11:29:59.115] W0921 11:02:44.655864    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.115] Sep 21 11:02:49.659: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.115] Sep 21 11:02:50.662: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.116] Sep 21 11:02:51.665: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.116] Sep 21 11:02:52.668: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.116] Sep 21 11:02:53.672: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.117] Sep 21 11:02:54.674: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
... skipping 7 lines ...
W0921 11:29:59.118] 
W0921 11:29:59.118] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.119] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.119] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.119] 1 loaded units listed.
W0921 11:29:59.119] , kubelet-20220921T102832
W0921 11:29:59.119] W0921 11:02:55.828940    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:54950->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.119] STEP: Starting the kubelet 09/21/22 11:02:55.84
W0921 11:29:59.120] W0921 11:02:55.895082    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.120] Sep 21 11:03:00.901: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.120] Sep 21 11:03:01.904: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.121] Sep 21 11:03:02.907: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.121] Sep 21 11:03:03.909: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.121] Sep 21 11:03:04.913: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.122] Sep 21 11:03:05.916: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
... skipping 19 lines ...
W0921 11:29:59.125]     STEP: Building a namespace api object, basename topology-manager-test 09/21/22 11:02:44.421
W0921 11:29:59.125]     Sep 21 11:02:44.429: INFO: Skipping waiting for service account
W0921 11:29:59.125]     [It] run Topology Manager policy test suite
W0921 11:29:59.125]       test/e2e_node/topology_manager_test.go:888
W0921 11:29:59.126]     STEP: by configuring Topology Manager policy to single-numa-node 09/21/22 11:02:44.446
W0921 11:29:59.126]     Sep 21 11:02:44.446: INFO: Configuring topology Manager policy to single-numa-node
W0921 11:29:59.126]     Sep 21 11:02:44.446: INFO: failed to find any VF device from [{0000:00:00.0 -1 false false} {0000:00:01.0 -1 false false} {0000:00:01.3 -1 false false} {0000:00:03.0 -1 false false} {0000:00:04.0 -1 false false} {0000:00:05.0 -1 false false}]
W0921 11:29:59.128]     Sep 21 11:02:44.447: INFO: New kubelet config is {{ } %!s(bool=true) /tmp/node-e2e-20220921T102832/static-pods3606252740 {1m0s} {10s} {20s}  map[] 0.0.0.0 %!s(int32=10250) %!s(int32=10255) /usr/libexec/kubernetes/kubelet-plugins/volume/exec/  /var/lib/kubelet/pki/kubelet.crt /var/lib/kubelet/pki/kubelet.key []  %!s(bool=false) %!s(bool=false) {{} {%!s(bool=false) {2m0s}} {%!s(bool=true)}} {AlwaysAllow {{5m0s} {30s}}} %!s(int32=5) %!s(int32=10) %!s(int32=5) %!s(int32=10) %!s(bool=true) %!s(bool=false) %!s(int32=10248) 127.0.0.1 %!s(int32=-999)  [] {4h0m0s} {10s} {5m0s} %!s(int32=40) {2m0s} %!s(int32=85) %!s(int32=80) {10s} /system.slice/kubelet.service  / %!s(bool=true) systemd static map[] {1s} None single-numa-node container map[] {2m0s} promiscuous-bridge %!s(int32=110) 10.100.0.0/24 %!s(int64=-1) /etc/resolv.conf %!s(bool=false) %!s(bool=true) {100ms} %!s(int64=1000000) %!s(int32=50) application/vnd.kubernetes.protobuf %!s(int32=5) %!s(int32=10) %!s(bool=false) map[memory.available:250Mi nodefs.available:10% nodefs.inodesFree:5%] map[] map[] {30s} %!s(int32=0) map[nodefs.available:5% nodefs.inodesFree:5%] %!s(int32=0) %!s(bool=true) %!s(bool=false) %!s(bool=true) %!s(int32=14) %!s(int32=15) map[CPUManager:%!s(bool=true) LocalStorageCapacityIsolation:%!s(bool=true) TopologyManager:%!s(bool=true)] %!s(bool=true) {} 10Mi %!s(int32=5) Watch [] %!s(bool=false) map[] map[cpu:200m]   [pods]   {text 5s %!s(v1.VerbosityLevel=4) [] {{%!s(bool=false) {{{%!s(int64=0) %!s(resource.Scale=0)} {%!s(*inf.Dec=<nil>)} 0 DecimalSI}}}}} %!s(bool=true) {0s} {0s} [] [] %!s(bool=true) %!s(bool=true) %!s(bool=false) %!s(*float64=0xc0012fe300) [] %!s(bool=true) %!s(*v1.TracingConfiguration=<nil>) %!s(bool=true)}
W0921 11:29:59.128]     STEP: Stopping the kubelet 09/21/22 11:02:44.447
W0921 11:29:59.128]     Sep 21 11:02:44.494: INFO: Get running kubelet with systemctl:   UNIT                            LOAD   ACTIVE SUB     DESCRIPTION
W0921 11:29:59.129]       kubelet-20220921T102832.service loaded active running /tmp/node-e2e-20220921T102832/kubelet --kubeconfig /tmp/node-e2e-20220921T102832/kubeconfig --root-dir /var/lib/kubelet --v 4 --feature-gates LocalStorageCapacityIsolation=true --hostname-override n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20220921T102832/kubelet-config --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service
W0921 11:29:59.129] 
W0921 11:29:59.129]     LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.129]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.129]     SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.129]     1 loaded units listed.
W0921 11:29:59.129]     , kubelet-20220921T102832
W0921 11:29:59.130]     W0921 11:02:44.589943    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:45092->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.130]     STEP: Starting the kubelet 09/21/22 11:02:44.599
W0921 11:29:59.130]     W0921 11:02:44.655864    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.131]     Sep 21 11:02:49.659: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.131]     Sep 21 11:02:50.662: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.131]     Sep 21 11:02:51.665: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.132]     Sep 21 11:02:52.668: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.132]     Sep 21 11:02:53.672: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.132]     Sep 21 11:02:54.674: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
... skipping 7 lines ...
... skipping 26 lines ...
W0921 11:29:59.143] 
W0921 11:29:59.143] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.143] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.143] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.144] 1 loaded units listed.
W0921 11:29:59.144] , kubelet-20220921T102832
W0921 11:29:59.144] W0921 11:03:07.105960    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:41618->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.144] STEP: Starting the kubelet 09/21/22 11:03:07.118
W0921 11:29:59.145] W0921 11:03:07.169607    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.145] Sep 21 11:03:12.175: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.145] Sep 21 11:03:13.178: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.146] Sep 21 11:03:14.181: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.146] Sep 21 11:03:15.184: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.146] Sep 21 11:03:16.187: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.147] Sep 21 11:03:17.189: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 23 lines ...
W0921 11:29:59.153] 
W0921 11:29:59.153] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.153] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.153] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.153] 1 loaded units listed.
W0921 11:29:59.154] , kubelet-20220921T102832
W0921 11:29:59.154] W0921 11:03:28.368970    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:52764->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.154] STEP: Starting the kubelet 09/21/22 11:03:28.378
W0921 11:29:59.154] W0921 11:03:28.431009    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.155] Sep 21 11:03:33.435: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.155] Sep 21 11:03:34.438: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.155] Sep 21 11:03:35.441: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.156] Sep 21 11:03:36.444: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.156] Sep 21 11:03:37.446: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.157] Sep 21 11:03:38.449: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 26 lines ...
... skipping 23 lines ...
... skipping 13 lines ...
W0921 11:29:59.177] STEP: Creating a kubernetes client 09/21/22 11:03:39.461
W0921 11:29:59.177] STEP: Building a namespace api object, basename downward-api 09/21/22 11:03:39.461
W0921 11:29:59.177] Sep 21 11:03:39.467: INFO: Skipping waiting for service account
W0921 11:29:59.177] [It] should provide default limits.hugepages-<pagesize> from node allocatable
W0921 11:29:59.177]   test/e2e/common/node/downwardapi.go:348
W0921 11:29:59.177] STEP: Creating a pod to test downward api env vars 09/21/22 11:03:39.467
W0921 11:29:59.178] Sep 21 11:03:39.479: INFO: Waiting up to 5m0s for pod "downward-api-04378d4e-64e2-4bb1-8e62-011e37fd58cd" in namespace "downward-api-3631" to be "Succeeded or Failed"
W0921 11:29:59.178] Sep 21 11:03:39.481: INFO: Pod "downward-api-04378d4e-64e2-4bb1-8e62-011e37fd58cd": Phase="Pending", Reason="", readiness=false. Elapsed: 1.620288ms
W0921 11:29:59.178] Sep 21 11:03:41.486: INFO: Pod "downward-api-04378d4e-64e2-4bb1-8e62-011e37fd58cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006664779s
W0921 11:29:59.178] Sep 21 11:03:43.483: INFO: Pod "downward-api-04378d4e-64e2-4bb1-8e62-011e37fd58cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.003813026s
W0921 11:29:59.179] Sep 21 11:03:45.483: INFO: Pod "downward-api-04378d4e-64e2-4bb1-8e62-011e37fd58cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.00359286s
W0921 11:29:59.179] STEP: Saw pod success 09/21/22 11:03:45.483
W0921 11:29:59.179] Sep 21 11:03:45.483: INFO: Pod "downward-api-04378d4e-64e2-4bb1-8e62-011e37fd58cd" satisfied condition "Succeeded or Failed"
W0921 11:29:59.179] Sep 21 11:03:45.487: INFO: Trying to get logs from node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d pod downward-api-04378d4e-64e2-4bb1-8e62-011e37fd58cd container dapi-container: <nil>
W0921 11:29:59.179] STEP: delete the pod 09/21/22 11:03:45.498
W0921 11:29:59.180] Sep 21 11:03:45.502: INFO: Waiting for pod downward-api-04378d4e-64e2-4bb1-8e62-011e37fd58cd to disappear
W0921 11:29:59.180] Sep 21 11:03:45.506: INFO: Pod downward-api-04378d4e-64e2-4bb1-8e62-011e37fd58cd no longer exists
W0921 11:29:59.180] [DeferCleanup] [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages]
W0921 11:29:59.180]   dump namespaces | framework.go:173
... skipping 16 lines ...
... skipping 654 lines ...
W0921 11:29:59.299] 
W0921 11:29:59.300] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.300] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.300] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.300] 1 loaded units listed.
W0921 11:29:59.300] , kubelet-20220921T102832
W0921 11:29:59.300] W0921 11:08:29.009976    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:48704->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.301] STEP: Starting the kubelet 09/21/22 11:08:29.023
W0921 11:29:59.301] W0921 11:08:29.080157    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.301] Sep 21 11:08:34.087: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.301] Sep 21 11:08:35.090: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.302] Sep 21 11:08:36.093: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.302] Sep 21 11:08:37.095: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.302] Sep 21 11:08:38.098: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.303] Sep 21 11:08:39.100: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.303] [It] should use unconfined when specified
W0921 11:29:59.303]   test/e2e_node/seccompdefault_test.go:66
W0921 11:29:59.303] STEP: Creating a pod to test SeccompDefault-unconfined 09/21/22 11:08:40.104
W0921 11:29:59.303] Sep 21 11:08:40.112: INFO: Waiting up to 5m0s for pod "seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e" in namespace "seccompdefault-test-2197" to be "Succeeded or Failed"
W0921 11:29:59.303] Sep 21 11:08:40.118: INFO: Pod "seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.678486ms
W0921 11:29:59.304] Sep 21 11:08:42.120: INFO: Pod "seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008054237s
W0921 11:29:59.304] Sep 21 11:08:44.122: INFO: Pod "seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009520607s
W0921 11:29:59.304] Sep 21 11:08:46.121: INFO: Pod "seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.009173357s
W0921 11:29:59.304] STEP: Saw pod success 09/21/22 11:08:46.121
W0921 11:29:59.304] Sep 21 11:08:46.121: INFO: Pod "seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e" satisfied condition "Succeeded or Failed"
W0921 11:29:59.305] Sep 21 11:08:46.123: INFO: Trying to get logs from node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d pod seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e container seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e: <nil>
W0921 11:29:59.305] STEP: delete the pod 09/21/22 11:08:46.135
W0921 11:29:59.305] Sep 21 11:08:46.139: INFO: Waiting for pod seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e to disappear
W0921 11:29:59.305] Sep 21 11:08:46.143: INFO: Pod seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e no longer exists
W0921 11:29:59.305] [AfterEach] with SeccompDefault enabled
W0921 11:29:59.305]   test/e2e_node/util.go:181
... skipping 3 lines ...
W0921 11:29:59.306] 
W0921 11:29:59.307] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.307] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.307] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.307] 1 loaded units listed.
W0921 11:29:59.307] , kubelet-20220921T102832
W0921 11:29:59.307] W0921 11:08:46.297142    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:35890->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.308] STEP: Starting the kubelet 09/21/22 11:08:46.308
W0921 11:29:59.308] W0921 11:08:46.367053    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.308] Sep 21 11:08:51.373: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.308] Sep 21 11:08:52.377: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.309] Sep 21 11:08:53.379: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.309] Sep 21 11:08:54.383: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.309] Sep 21 11:08:55.386: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.309] Sep 21 11:08:56.389: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 27 lines ...
W0921 11:29:59.314] 
W0921 11:29:59.314]     LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.314]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.315]     SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.315]     1 loaded units listed.
W0921 11:29:59.315]     , kubelet-20220921T102832
W0921 11:29:59.315]     W0921 11:08:29.009976    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:48704->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.315]     STEP: Starting the kubelet 09/21/22 11:08:29.023
W0921 11:29:59.315]     W0921 11:08:29.080157    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.316]     Sep 21 11:08:34.087: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.316]     Sep 21 11:08:35.090: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.316]     Sep 21 11:08:36.093: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.316]     Sep 21 11:08:37.095: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.317]     Sep 21 11:08:38.098: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.317]     Sep 21 11:08:39.100: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.317]     [It] should use unconfined when specified
W0921 11:29:59.317]       test/e2e_node/seccompdefault_test.go:66
W0921 11:29:59.317]     STEP: Creating a pod to test SeccompDefault-unconfined 09/21/22 11:08:40.104
W0921 11:29:59.318]     Sep 21 11:08:40.112: INFO: Waiting up to 5m0s for pod "seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e" in namespace "seccompdefault-test-2197" to be "Succeeded or Failed"
W0921 11:29:59.318]     Sep 21 11:08:40.118: INFO: Pod "seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.678486ms
W0921 11:29:59.318]     Sep 21 11:08:42.120: INFO: Pod "seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008054237s
W0921 11:29:59.318]     Sep 21 11:08:44.122: INFO: Pod "seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009520607s
W0921 11:29:59.318]     Sep 21 11:08:46.121: INFO: Pod "seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.009173357s
W0921 11:29:59.319]     STEP: Saw pod success 09/21/22 11:08:46.121
W0921 11:29:59.319]     Sep 21 11:08:46.121: INFO: Pod "seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e" satisfied condition "Succeeded or Failed"
W0921 11:29:59.319]     Sep 21 11:08:46.123: INFO: Trying to get logs from node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d pod seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e container seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e: <nil>
W0921 11:29:59.319]     STEP: delete the pod 09/21/22 11:08:46.135
W0921 11:29:59.319]     Sep 21 11:08:46.139: INFO: Waiting for pod seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e to disappear
W0921 11:29:59.320]     Sep 21 11:08:46.143: INFO: Pod seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e no longer exists
W0921 11:29:59.320]     [AfterEach] with SeccompDefault enabled
W0921 11:29:59.320]       test/e2e_node/util.go:181
... skipping 3 lines ...
W0921 11:29:59.321] 
W0921 11:29:59.321]     LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.321]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.321]     SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.322]     1 loaded units listed.
W0921 11:29:59.322]     , kubelet-20220921T102832
W0921 11:29:59.322]     W0921 11:08:46.297142    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:35890->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.322]     STEP: Starting the kubelet 09/21/22 11:08:46.308
W0921 11:29:59.322]     W0921 11:08:46.367053    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.323]     Sep 21 11:08:51.373: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.323]     Sep 21 11:08:52.377: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.323]     Sep 21 11:08:53.379: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.323]     Sep 21 11:08:54.383: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.324]     Sep 21 11:08:55.386: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.324]     Sep 21 11:08:56.389: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 73 lines ...
W0921 11:29:59.335] 
W0921 11:29:59.335] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.335] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.336] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.336] 1 loaded units listed.
W0921 11:29:59.336] , kubelet-20220921T102832
W0921 11:29:59.336] W0921 11:08:57.649936    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:54338->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.336] STEP: Starting the kubelet 09/21/22 11:08:57.659
W0921 11:29:59.336] W0921 11:08:57.716252    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.337] Sep 21 11:09:02.722: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.337] Sep 21 11:09:03.725: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.337] Sep 21 11:09:04.728: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.338] Sep 21 11:09:05.731: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.338] Sep 21 11:09:06.734: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.338] Sep 21 11:09:07.737: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 63 lines ...
W0921 11:29:59.349] 
W0921 11:29:59.349] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.349] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.349] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.350] 1 loaded units listed.
W0921 11:29:59.350] , kubelet-20220921T102832
W0921 11:29:59.350] W0921 11:09:46.957935    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:41174->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.350] STEP: Starting the kubelet 09/21/22 11:09:46.967
W0921 11:29:59.350] W0921 11:09:47.029293    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.351] Sep 21 11:09:52.032: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.351] Sep 21 11:09:53.035: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.351] Sep 21 11:09:54.039: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.351] Sep 21 11:09:55.042: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.352] Sep 21 11:09:56.045: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.352] Sep 21 11:09:57.047: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 32 lines ...
W0921 11:29:59.357] 
W0921 11:29:59.357]     LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.357]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.357]     SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.357]     1 loaded units listed.
W0921 11:29:59.357]     , kubelet-20220921T102832
W0921 11:29:59.358]     W0921 11:08:57.649936    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:54338->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.358]     STEP: Starting the kubelet 09/21/22 11:08:57.659
W0921 11:29:59.358]     W0921 11:08:57.716252    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.358]     Sep 21 11:09:02.722: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.359]     Sep 21 11:09:03.725: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.359]     Sep 21 11:09:04.728: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.359]     Sep 21 11:09:05.731: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.360]     Sep 21 11:09:06.734: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.360]     Sep 21 11:09:07.737: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 63 lines ...
W0921 11:29:59.371] 
W0921 11:29:59.371]     LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.371]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.371]     SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.371]     1 loaded units listed.
W0921 11:29:59.371]     , kubelet-20220921T102832
W0921 11:29:59.372]     W0921 11:09:46.957935    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:41174->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.372]     STEP: Starting the kubelet 09/21/22 11:09:46.967
W0921 11:29:59.372]     W0921 11:09:47.029293    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.372]     Sep 21 11:09:52.032: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.373]     Sep 21 11:09:53.035: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.373]     Sep 21 11:09:54.039: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.373]     Sep 21 11:09:55.042: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.373]     Sep 21 11:09:56.045: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.374]     Sep 21 11:09:57.047: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 23 lines ...
W0921 11:29:59.378] 
W0921 11:29:59.378] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.378] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.378] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.378] 1 loaded units listed.
W0921 11:29:59.378] , kubelet-20220921T102832
W0921 11:29:59.379] W0921 11:09:58.227233    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:58892->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.379] STEP: Starting the kubelet 09/21/22 11:09:58.236
W0921 11:29:59.379] W0921 11:09:58.294935    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.379] Sep 21 11:10:03.298: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.380] Sep 21 11:10:04.301: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.380] Sep 21 11:10:05.304: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.380] Sep 21 11:10:06.307: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.380] Sep 21 11:10:07.309: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.381] Sep 21 11:10:08.312: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.381] [It] a pod failing to mount volumes and without init containers should report scheduled and initialized conditions set
W0921 11:29:59.381]   test/e2e_node/pod_conditions_test.go:58
W0921 11:29:59.381] STEP: creating a pod whose sandbox creation is blocked due to a missing volume 09/21/22 11:10:09.314
W0921 11:29:59.381] STEP: waiting until kubelet has started trying to set up the pod and started to fail 09/21/22 11:10:09.322
W0921 11:29:59.381] STEP: checking pod condition for a pod whose sandbox creation is blocked 09/21/22 11:10:11.332
W0921 11:29:59.382] [AfterEach] including PodHasNetwork condition [Serial] [Feature:PodHasNetwork]
W0921 11:29:59.382]   test/e2e_node/util.go:181
W0921 11:29:59.382] STEP: Stopping the kubelet 09/21/22 11:10:11.333
W0921 11:29:59.382] Sep 21 11:10:11.383: INFO: Get running kubelet with systemctl:   UNIT                            LOAD   ACTIVE SUB     DESCRIPTION
W0921 11:29:59.382]   kubelet-20220921T102832.service loaded active running /tmp/node-e2e-20220921T102832/kubelet --kubeconfig /tmp/node-e2e-20220921T102832/kubeconfig --root-dir /var/lib/kubelet --v 4 --feature-gates LocalStorageCapacityIsolation=true --hostname-override n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20220921T102832/kubelet-config --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service
W0921 11:29:59.382] 
W0921 11:29:59.383] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.383] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.383] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.383] 1 loaded units listed.
W0921 11:29:59.383] , kubelet-20220921T102832
W0921 11:29:59.383] W0921 11:10:11.494969    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:58824->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.383] STEP: Starting the kubelet 09/21/22 11:10:11.507
W0921 11:29:59.384] W0921 11:10:11.565399    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.384] Sep 21 11:10:16.569: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.384] Sep 21 11:10:17.572: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.384] Sep 21 11:10:18.575: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.385] Sep 21 11:10:19.578: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.385] Sep 21 11:10:20.581: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.385] Sep 21 11:10:21.584: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
... skipping 26 lines ...
W0921 11:29:59.389] 
W0921 11:29:59.389]     LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.389]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.390]     SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.390]     1 loaded units listed.
W0921 11:29:59.390]     , kubelet-20220921T102832
W0921 11:29:59.390]     W0921 11:09:58.227233    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:58892->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.390]     STEP: Starting the kubelet 09/21/22 11:09:58.236
W0921 11:29:59.390]     W0921 11:09:58.294935    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.391]     Sep 21 11:10:03.298: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.391]     Sep 21 11:10:04.301: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.391]     Sep 21 11:10:05.304: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.392]     Sep 21 11:10:06.307: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.392]     Sep 21 11:10:07.309: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.392]     Sep 21 11:10:08.312: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.393]     [It] a pod failing to mount volumes and without init containers should report scheduled and initialized conditions set
W0921 11:29:59.393]       test/e2e_node/pod_conditions_test.go:58
W0921 11:29:59.393]     STEP: creating a pod whose sandbox creation is blocked due to a missing volume 09/21/22 11:10:09.314
W0921 11:29:59.393]     STEP: waiting until kubelet has started trying to set up the pod and started to fail 09/21/22 11:10:09.322
W0921 11:29:59.394]     STEP: checking pod condition for a pod whose sandbox creation is blocked 09/21/22 11:10:11.332
W0921 11:29:59.394]     [AfterEach] including PodHasNetwork condition [Serial] [Feature:PodHasNetwork]
W0921 11:29:59.394]       test/e2e_node/util.go:181
W0921 11:29:59.394]     STEP: Stopping the kubelet 09/21/22 11:10:11.333
W0921 11:29:59.394]     Sep 21 11:10:11.383: INFO: Get running kubelet with systemctl:   UNIT                            LOAD   ACTIVE SUB     DESCRIPTION
W0921 11:29:59.395]       kubelet-20220921T102832.service loaded active running /tmp/node-e2e-20220921T102832/kubelet --kubeconfig /tmp/node-e2e-20220921T102832/kubeconfig --root-dir /var/lib/kubelet --v 4 --feature-gates LocalStorageCapacityIsolation=true --hostname-override n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20220921T102832/kubelet-config --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service
W0921 11:29:59.395] 
W0921 11:29:59.395]     LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.395]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.396]     SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.396]     1 loaded units listed.
W0921 11:29:59.396]     , kubelet-20220921T102832
W0921 11:29:59.396]     W0921 11:10:11.494969    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:58824->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.396]     STEP: Starting the kubelet 09/21/22 11:10:11.507
W0921 11:29:59.396]     W0921 11:10:11.565399    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.397]     Sep 21 11:10:16.569: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.397]     Sep 21 11:10:17.572: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.397]     Sep 21 11:10:18.575: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.397]     Sep 21 11:10:19.578: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.398]     Sep 21 11:10:20.581: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.398]     Sep 21 11:10:21.584: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
... skipping 26 lines ...
W0921 11:29:59.403] 
W0921 11:29:59.403] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.403] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.403] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.403] 1 loaded units listed.
W0921 11:29:59.403] , kubelet-20220921T102832
W0921 11:29:59.404] W0921 11:10:22.773961    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:36804->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.404] STEP: Starting the kubelet 09/21/22 11:10:22.783
W0921 11:29:59.404] W0921 11:10:22.839164    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.404] Sep 21 11:10:27.842: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.405] Sep 21 11:10:28.845: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.405] Sep 21 11:10:29.848: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.405] Sep 21 11:10:30.851: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.406] Sep 21 11:10:31.854: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.406] Sep 21 11:10:32.856: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 29 lines ...
W0921 11:29:59.411] 
W0921 11:29:59.411]     LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.411]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.411]     SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.412]     1 loaded units listed.
W0921 11:29:59.412]     , kubelet-20220921T102832
W0921 11:29:59.412]     W0921 11:10:22.773961    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:36804->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.412]     STEP: Starting the kubelet 09/21/22 11:10:22.783
W0921 11:29:59.412]     W0921 11:10:22.839164    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.413]     Sep 21 11:10:27.842: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.413]     Sep 21 11:10:28.845: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.413]     Sep 21 11:10:29.848: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.413]     Sep 21 11:10:30.851: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.414]     Sep 21 11:10:31.854: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.414]     Sep 21 11:10:32.856: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 30 lines ...
W0921 11:29:59.419] 
W0921 11:29:59.419] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.419] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.419] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.420] 1 loaded units listed.
W0921 11:29:59.420] , kubelet-20220921T102832
W0921 11:29:59.420] W0921 11:10:34.073133    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:56454->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.420] STEP: Starting the kubelet 09/21/22 11:10:34.082
W0921 11:29:59.420] W0921 11:10:34.143071    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.421] Sep 21 11:10:39.147: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.421] Sep 21 11:10:40.151: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.421] Sep 21 11:10:41.154: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.422] Sep 21 11:10:42.157: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.422] Sep 21 11:10:43.160: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.422] Sep 21 11:10:44.163: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 17 lines ...
W0921 11:29:59.425] 
W0921 11:29:59.425] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.426] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.426] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.426] 1 loaded units listed.
W0921 11:29:59.426] , kubelet-20220921T102832
W0921 11:29:59.426] W0921 11:10:45.341953    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:34048->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.426] STEP: Starting the kubelet 09/21/22 11:10:45.353
W0921 11:29:59.427] W0921 11:10:45.404570    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.427] Sep 21 11:10:50.409: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.427] Sep 21 11:10:51.412: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.428] Sep 21 11:10:52.414: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.428] Sep 21 11:10:53.417: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.428] Sep 21 11:10:54.420: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.428] Sep 21 11:10:55.423: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 32 lines ...
W0921 11:29:59.434] 
W0921 11:29:59.434]     LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.435]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.435]     SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.435]     1 loaded units listed.
W0921 11:29:59.435]     , kubelet-20220921T102832
W0921 11:29:59.435]     W0921 11:10:34.073133    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:56454->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.435]     STEP: Starting the kubelet 09/21/22 11:10:34.082
W0921 11:29:59.436]     W0921 11:10:34.143071    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.436]     Sep 21 11:10:39.147: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.436]     Sep 21 11:10:40.151: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.437]     Sep 21 11:10:41.154: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.437]     Sep 21 11:10:42.157: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.437]     Sep 21 11:10:43.160: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.438]     Sep 21 11:10:44.163: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 17 lines ...
W0921 11:29:59.441] 
W0921 11:29:59.441]     LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.441]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.441]     SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.441]     1 loaded units listed.
W0921 11:29:59.441]     , kubelet-20220921T102832
W0921 11:29:59.442]     W0921 11:10:45.341953    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:34048->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.442]     STEP: Starting the kubelet 09/21/22 11:10:45.353
W0921 11:29:59.442]     W0921 11:10:45.404570    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.442]     Sep 21 11:10:50.409: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.443]     Sep 21 11:10:51.412: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.443]     Sep 21 11:10:52.414: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.443]     Sep 21 11:10:53.417: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.444]     Sep 21 11:10:54.420: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.444]     Sep 21 11:10:55.423: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 31 lines ...
W0921 11:29:59.450] 
W0921 11:29:59.451] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.451] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.451] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.451] 1 loaded units listed.
W0921 11:29:59.451] , kubelet-20220921T102832
W0921 11:29:59.452] W0921 11:10:56.615953    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:46580->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.452] STEP: Starting the kubelet 09/21/22 11:10:56.627
W0921 11:29:59.452] W0921 11:10:56.679211    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.452] Sep 21 11:11:01.727: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.453] Sep 21 11:11:02.730: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.453] Sep 21 11:11:03.733: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.453] Sep 21 11:11:04.735: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.454] Sep 21 11:11:05.739: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.454] Sep 21 11:11:06.741: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 31 lines ...
W0921 11:29:59.460] 
W0921 11:29:59.460]     LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.460]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.460]     SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.460]     1 loaded units listed.
W0921 11:29:59.460]     , kubelet-20220921T102832
W0921 11:29:59.461]     W0921 11:10:56.615953    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:46580->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.461]     STEP: Starting the kubelet 09/21/22 11:10:56.627
W0921 11:29:59.461]     W0921 11:10:56.679211    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.461]     Sep 21 11:11:01.727: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.462]     Sep 21 11:11:02.730: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.462]     Sep 21 11:11:03.733: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.462]     Sep 21 11:11:04.735: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.463]     Sep 21 11:11:05.739: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.463]     Sep 21 11:11:06.741: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 18 lines ...
W0921 11:29:59.466] STEP: Creating a kubernetes client 09/21/22 11:11:07.753
W0921 11:29:59.466] STEP: Building a namespace api object, basename downward-api 09/21/22 11:11:07.753
W0921 11:29:59.466] Sep 21 11:11:07.760: INFO: Skipping waiting for service account
W0921 11:29:59.466] [It] should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
W0921 11:29:59.466]   test/e2e/common/storage/downwardapi.go:38
W0921 11:29:59.467] STEP: Creating a pod to test downward api env vars 09/21/22 11:11:07.76
W0921 11:29:59.467] Sep 21 11:11:07.768: INFO: Waiting up to 5m0s for pod "downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963" in namespace "downward-api-7241" to be "Succeeded or Failed"
W0921 11:29:59.467] Sep 21 11:11:07.770: INFO: Pod "downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963": Phase="Pending", Reason="", readiness=false. Elapsed: 1.611778ms
W0921 11:29:59.467] Sep 21 11:11:09.772: INFO: Pod "downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003973964s
W0921 11:29:59.468] Sep 21 11:11:11.773: INFO: Pod "downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963": Phase="Pending", Reason="", readiness=false. Elapsed: 4.004243115s
W0921 11:29:59.468] Sep 21 11:11:13.773: INFO: Pod "downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.00491682s
W0921 11:29:59.468] STEP: Saw pod success 09/21/22 11:11:13.773
W0921 11:29:59.468] Sep 21 11:11:13.773: INFO: Pod "downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963" satisfied condition "Succeeded or Failed"
W0921 11:29:59.469] Sep 21 11:11:13.775: INFO: Trying to get logs from node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d pod downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963 container dapi-container: <nil>
W0921 11:29:59.469] STEP: delete the pod 09/21/22 11:11:13.789
W0921 11:29:59.469] Sep 21 11:11:13.792: INFO: Waiting for pod downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963 to disappear
W0921 11:29:59.469] Sep 21 11:11:13.797: INFO: Pod downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963 no longer exists
W0921 11:29:59.469] [DeferCleanup] [sig-storage] Downward API [Serial] [Disruptive] [Feature:EphemeralStorage]
W0921 11:29:59.470]   dump namespaces | framework.go:173
... skipping 16 lines ...
W0921 11:29:59.472]     STEP: Creating a kubernetes client 09/21/22 11:11:07.753
W0921 11:29:59.473]     STEP: Building a namespace api object, basename downward-api 09/21/22 11:11:07.753
W0921 11:29:59.473]     Sep 21 11:11:07.760: INFO: Skipping waiting for service account
W0921 11:29:59.473]     [It] should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
W0921 11:29:59.473]       test/e2e/common/storage/downwardapi.go:38
W0921 11:29:59.473]     STEP: Creating a pod to test downward api env vars 09/21/22 11:11:07.76
W0921 11:29:59.473]     Sep 21 11:11:07.768: INFO: Waiting up to 5m0s for pod "downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963" in namespace "downward-api-7241" to be "Succeeded or Failed"
W0921 11:29:59.474]     Sep 21 11:11:07.770: INFO: Pod "downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963": Phase="Pending", Reason="", readiness=false. Elapsed: 1.611778ms
W0921 11:29:59.474]     Sep 21 11:11:09.772: INFO: Pod "downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003973964s
W0921 11:29:59.474]     Sep 21 11:11:11.773: INFO: Pod "downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963": Phase="Pending", Reason="", readiness=false. Elapsed: 4.004243115s
W0921 11:29:59.474]     Sep 21 11:11:13.773: INFO: Pod "downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.00491682s
W0921 11:29:59.474]     STEP: Saw pod success 09/21/22 11:11:13.773
W0921 11:29:59.475]     Sep 21 11:11:13.773: INFO: Pod "downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963" satisfied condition "Succeeded or Failed"
W0921 11:29:59.475]     Sep 21 11:11:13.775: INFO: Trying to get logs from node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d pod downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963 container dapi-container: <nil>
W0921 11:29:59.475]     STEP: delete the pod 09/21/22 11:11:13.789
W0921 11:29:59.475]     Sep 21 11:11:13.792: INFO: Waiting for pod downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963 to disappear
W0921 11:29:59.476]     Sep 21 11:11:13.797: INFO: Pod downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963 no longer exists
W0921 11:29:59.476]     [DeferCleanup] [sig-storage] Downward API [Serial] [Disruptive] [Feature:EphemeralStorage]
W0921 11:29:59.476]       dump namespaces | framework.go:173
... skipping 517 lines ...
W0921 11:29:59.577]     STEP: Destroying namespace "node-label-reconciliation-5927" for this suite. 09/21/22 11:13:49.748
W0921 11:29:59.577]   << End Captured GinkgoWriter Output
W0921 11:29:59.577] ------------------------------
W0921 11:29:59.577] SS
W0921 11:29:59.577] ------------------------------
W0921 11:29:59.578] [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] without SRIOV devices in the system with disabled KubeletPodResourcesGetAllocatable feature gate
W0921 11:29:59.578]   should return the expected error with the feature gate disabled
W0921 11:29:59.578]   test/e2e_node/podresources_test.go:712
W0921 11:29:59.578] [BeforeEach] [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources]
W0921 11:29:59.578]   set up framework | framework.go:158
W0921 11:29:59.579] STEP: Creating a kubernetes client 09/21/22 11:13:49.755
W0921 11:29:59.579] STEP: Building a namespace api object, basename podresources-test 09/21/22 11:13:49.755
W0921 11:29:59.579] Sep 21 11:13:49.763: INFO: Skipping waiting for service account
... skipping 7 lines ...
W0921 11:29:59.581] 
W0921 11:29:59.581] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.581] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.581] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.581] 1 loaded units listed.
W0921 11:29:59.581] , kubelet-20220921T102832
W0921 11:29:59.582] W0921 11:13:49.944001    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:58570->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.582] STEP: Starting the kubelet 09/21/22 11:13:49.953
W0921 11:29:59.582] W0921 11:13:50.016733    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.582] Sep 21 11:13:55.023: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.583] Sep 21 11:13:56.026: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.583] Sep 21 11:13:57.029: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.584] Sep 21 11:13:58.032: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.584] Sep 21 11:13:59.035: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.584] Sep 21 11:14:00.038: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.585] [It] should return the expected error with the feature gate disabled
W0921 11:29:59.585]   test/e2e_node/podresources_test.go:712
W0921 11:29:59.585] STEP: checking GetAllocatableResources fail if the feature gate is not enabled 09/21/22 11:14:01.041
W0921 11:29:59.585] Sep 21 11:14:01.044: INFO: GetAllocatableResources result: nil, err: rpc error: code = Unknown desc = Pod Resources API GetAllocatableResources disabled
W0921 11:29:59.585] [AfterEach] with disabled KubeletPodResourcesGetAllocatable feature gate
W0921 11:29:59.586]   test/e2e_node/util.go:181
W0921 11:29:59.586] STEP: Stopping the kubelet 09/21/22 11:14:01.045
W0921 11:29:59.586] Sep 21 11:14:01.091: INFO: Get running kubelet with systemctl:   UNIT                            LOAD   ACTIVE SUB     DESCRIPTION
W0921 11:29:59.587]   kubelet-20220921T102832.service loaded active running /tmp/node-e2e-20220921T102832/kubelet --kubeconfig /tmp/node-e2e-20220921T102832/kubeconfig --root-dir /var/lib/kubelet --v 4 --feature-gates LocalStorageCapacityIsolation=true --hostname-override n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20220921T102832/kubelet-config --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service
W0921 11:29:59.587] 
W0921 11:29:59.587] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.587] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.587] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.587] 1 loaded units listed.
W0921 11:29:59.588] , kubelet-20220921T102832
W0921 11:29:59.588] W0921 11:14:01.192942    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:57372->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.588] STEP: Starting the kubelet 09/21/22 11:14:01.204
W0921 11:29:59.588] W0921 11:14:01.261080    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.589] Sep 21 11:14:06.268: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.589] Sep 21 11:14:07.270: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.589] Sep 21 11:14:08.273: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.590] Sep 21 11:14:09.276: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.590] Sep 21 11:14:10.278: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.590] Sep 21 11:14:11.282: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 8 lines ...
W0921 11:29:59.592] [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources]
W0921 11:29:59.593] test/e2e_node/framework.go:23
W0921 11:29:59.593]   without SRIOV devices in the system
W0921 11:29:59.593]   test/e2e_node/podresources_test.go:643
W0921 11:29:59.593]     with disabled KubeletPodResourcesGetAllocatable feature gate
W0921 11:29:59.593]     test/e2e_node/podresources_test.go:704
W0921 11:29:59.593]       should return the expected error with the feature gate disabled
W0921 11:29:59.593]       test/e2e_node/podresources_test.go:712
W0921 11:29:59.594] 
W0921 11:29:59.594]   Begin Captured GinkgoWriter Output >>
W0921 11:29:59.594]     [BeforeEach] [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources]
W0921 11:29:59.594]       set up framework | framework.go:158
W0921 11:29:59.594]     STEP: Creating a kubernetes client 09/21/22 11:13:49.755
... skipping 9 lines ...
W0921 11:29:59.596] 
W0921 11:29:59.596]     LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.597]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.597]     SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.597]     1 loaded units listed.
W0921 11:29:59.597]     , kubelet-20220921T102832
W0921 11:29:59.598]     W0921 11:13:49.944001    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:58570->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.598]     STEP: Starting the kubelet 09/21/22 11:13:49.953
W0921 11:29:59.598]     W0921 11:13:50.016733    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.598]     Sep 21 11:13:55.023: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.599]     Sep 21 11:13:56.026: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.599]     Sep 21 11:13:57.029: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.599]     Sep 21 11:13:58.032: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.600]     Sep 21 11:13:59.035: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.600]     Sep 21 11:14:00.038: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0921 11:29:59.600]     [It] should return the expected error with the feature gate disabled
W0921 11:29:59.600]       test/e2e_node/podresources_test.go:712
W0921 11:29:59.601]     STEP: checking GetAllocatableResources fail if the feature gate is not enabled 09/21/22 11:14:01.041
W0921 11:29:59.601]     Sep 21 11:14:01.044: INFO: GetAllocatableResources result: nil, err: rpc error: code = Unknown desc = Pod Resources API GetAllocatableResources disabled
W0921 11:29:59.601]     [AfterEach] with disabled KubeletPodResourcesGetAllocatable feature gate
W0921 11:29:59.601]       test/e2e_node/util.go:181
W0921 11:29:59.601]     STEP: Stopping the kubelet 09/21/22 11:14:01.045
W0921 11:29:59.601]     Sep 21 11:14:01.091: INFO: Get running kubelet with systemctl:   UNIT                            LOAD   ACTIVE SUB     DESCRIPTION
W0921 11:29:59.602]       kubelet-20220921T102832.service loaded active running /tmp/node-e2e-20220921T102832/kubelet --kubeconfig /tmp/node-e2e-20220921T102832/kubeconfig --root-dir /var/lib/kubelet --v 4 --feature-gates LocalStorageCapacityIsolation=true --hostname-override n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20220921T102832/kubelet-config --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service
W0921 11:29:59.602] 
W0921 11:29:59.602]     LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.603]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.603]     SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.603]     1 loaded units listed.
W0921 11:29:59.603]     , kubelet-20220921T102832
W0921 11:29:59.604]     W0921 11:14:01.192942    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:57372->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.604]     STEP: Starting the kubelet 09/21/22 11:14:01.204
W0921 11:29:59.604]     W0921 11:14:01.261080    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.604]     Sep 21 11:14:06.268: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.605]     Sep 21 11:14:07.270: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.605]     Sep 21 11:14:08.273: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.605]     Sep 21 11:14:09.276: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.606]     Sep 21 11:14:10.278: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.606]     Sep 21 11:14:11.282: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 50 lines ...
W0921 11:29:59.616] STEP: Wait for 0 temp events generated 09/21/22 11:14:28.32
W0921 11:29:59.616] STEP: Wait for 0 total events generated 09/21/22 11:14:28.332
W0921 11:29:59.617] STEP: Make sure only 0 total events generated 09/21/22 11:14:28.341
W0921 11:29:59.617] STEP: Make sure node condition "TestCondition" is set 09/21/22 11:14:33.341
W0921 11:29:59.617] STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:14:33.344
W0921 11:29:59.617] STEP: should not generate events for too old log 09/21/22 11:14:38.344
W0921 11:29:59.617] STEP: Inject 3 logs: "temporary error" 09/21/22 11:14:38.344
W0921 11:29:59.618] STEP: Wait for 0 temp events generated 09/21/22 11:14:38.345
W0921 11:29:59.618] STEP: Wait for 0 total events generated 09/21/22 11:14:38.354
W0921 11:29:59.618] STEP: Make sure only 0 total events generated 09/21/22 11:14:38.362
W0921 11:29:59.618] STEP: Make sure node condition "TestCondition" is set 09/21/22 11:14:43.362
W0921 11:29:59.618] STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:14:43.365
W0921 11:29:59.618] STEP: should not change node condition for too old log 09/21/22 11:14:48.365
W0921 11:29:59.618] STEP: Inject 1 logs: "permanent error 1" 09/21/22 11:14:48.365
W0921 11:29:59.619] STEP: Wait for 0 temp events generated 09/21/22 11:14:48.365
W0921 11:29:59.619] STEP: Wait for 0 total events generated 09/21/22 11:14:48.374
W0921 11:29:59.619] STEP: Make sure only 0 total events generated 09/21/22 11:14:48.381
W0921 11:29:59.619] STEP: Make sure node condition "TestCondition" is set 09/21/22 11:14:53.382
W0921 11:29:59.619] STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:14:53.384
W0921 11:29:59.620] STEP: should generate event for old log within lookback duration 09/21/22 11:14:58.384
W0921 11:29:59.620] STEP: Inject 3 logs: "temporary error" 09/21/22 11:14:58.384
W0921 11:29:59.620] STEP: Wait for 3 temp events generated 09/21/22 11:14:58.385
W0921 11:29:59.620] STEP: Wait for 3 total events generated 09/21/22 11:14:59.403
W0921 11:29:59.620] STEP: Make sure only 3 total events generated 09/21/22 11:14:59.412
W0921 11:29:59.620] STEP: Make sure node condition "TestCondition" is set 09/21/22 11:15:04.412
W0921 11:29:59.621] STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:15:04.418
W0921 11:29:59.621] STEP: should change node condition for old log within lookback duration 09/21/22 11:15:09.418
W0921 11:29:59.621] STEP: Inject 1 logs: "permanent error 1" 09/21/22 11:15:09.418
W0921 11:29:59.621] STEP: Wait for 3 temp events generated 09/21/22 11:15:09.419
W0921 11:29:59.621] STEP: Wait for 4 total events generated 09/21/22 11:15:09.428
W0921 11:29:59.622] STEP: Make sure only 4 total events generated 09/21/22 11:15:10.454
W0921 11:29:59.622] STEP: Make sure node condition "TestCondition" is set 09/21/22 11:15:15.454
W0921 11:29:59.622] STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:15:15.457
W0921 11:29:59.622] STEP: should generate event for new log 09/21/22 11:15:20.457
W0921 11:29:59.622] STEP: Inject 3 logs: "temporary error" 09/21/22 11:15:20.457
W0921 11:29:59.622] STEP: Wait for 6 temp events generated 09/21/22 11:15:20.458
W0921 11:29:59.622] STEP: Wait for 7 total events generated 09/21/22 11:15:21.475
W0921 11:29:59.623] STEP: Make sure only 7 total events generated 09/21/22 11:15:21.484
W0921 11:29:59.623] STEP: Make sure node condition "TestCondition" is set 09/21/22 11:15:26.484
W0921 11:29:59.623] STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:15:26.487
W0921 11:29:59.623] STEP: should not update node condition with the same reason 09/21/22 11:15:31.487
W0921 11:29:59.623] STEP: Inject 1 logs: "permanent error 1different message" 09/21/22 11:15:31.488
W0921 11:29:59.623] STEP: Wait for 6 temp events generated 09/21/22 11:15:31.488
W0921 11:29:59.624] STEP: Wait for 7 total events generated 09/21/22 11:15:31.497
W0921 11:29:59.624] STEP: Make sure only 7 total events generated 09/21/22 11:15:31.503
W0921 11:29:59.624] STEP: Make sure node condition "TestCondition" is set 09/21/22 11:15:36.503
W0921 11:29:59.624] STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:15:36.506
W0921 11:29:59.624] STEP: should change node condition for new log 09/21/22 11:15:41.506
W0921 11:29:59.624] STEP: Inject 1 logs: "permanent error 2" 09/21/22 11:15:41.506
W0921 11:29:59.625] STEP: Wait for 6 temp events generated 09/21/22 11:15:41.507
W0921 11:29:59.625] STEP: Wait for 8 total events generated 09/21/22 11:15:41.515
W0921 11:29:59.625] STEP: Make sure only 8 total events generated 09/21/22 11:15:42.534
W0921 11:29:59.625] STEP: Make sure node condition "TestCondition" is set 09/21/22 11:15:47.534
W0921 11:29:59.625] STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:15:47.536
W0921 11:29:59.625] [AfterEach] SystemLogMonitor
... skipping 61 lines ...
W0921 11:29:59.637]     STEP: Wait for 0 temp events generated 09/21/22 11:14:28.32
W0921 11:29:59.637]     STEP: Wait for 0 total events generated 09/21/22 11:14:28.332
W0921 11:29:59.638]     STEP: Make sure only 0 total events generated 09/21/22 11:14:28.341
W0921 11:29:59.638]     STEP: Make sure node condition "TestCondition" is set 09/21/22 11:14:33.341
W0921 11:29:59.638]     STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:14:33.344
W0921 11:29:59.638]     STEP: should not generate events for too old log 09/21/22 11:14:38.344
W0921 11:29:59.638]     STEP: Inject 3 logs: "temporary error" 09/21/22 11:14:38.344
W0921 11:29:59.639]     STEP: Wait for 0 temp events generated 09/21/22 11:14:38.345
W0921 11:29:59.639]     STEP: Wait for 0 total events generated 09/21/22 11:14:38.354
W0921 11:29:59.639]     STEP: Make sure only 0 total events generated 09/21/22 11:14:38.362
W0921 11:29:59.639]     STEP: Make sure node condition "TestCondition" is set 09/21/22 11:14:43.362
W0921 11:29:59.639]     STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:14:43.365
W0921 11:29:59.640]     STEP: should not change node condition for too old log 09/21/22 11:14:48.365
W0921 11:29:59.640]     STEP: Inject 1 logs: "permanent error 1" 09/21/22 11:14:48.365
W0921 11:29:59.640]     STEP: Wait for 0 temp events generated 09/21/22 11:14:48.365
W0921 11:29:59.640]     STEP: Wait for 0 total events generated 09/21/22 11:14:48.374
W0921 11:29:59.640]     STEP: Make sure only 0 total events generated 09/21/22 11:14:48.381
W0921 11:29:59.640]     STEP: Make sure node condition "TestCondition" is set 09/21/22 11:14:53.382
W0921 11:29:59.641]     STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:14:53.384
W0921 11:29:59.641]     STEP: should generate event for old log within lookback duration 09/21/22 11:14:58.384
W0921 11:29:59.641]     STEP: Inject 3 logs: "temporary error" 09/21/22 11:14:58.384
W0921 11:29:59.641]     STEP: Wait for 3 temp events generated 09/21/22 11:14:58.385
W0921 11:29:59.641]     STEP: Wait for 3 total events generated 09/21/22 11:14:59.403
W0921 11:29:59.641]     STEP: Make sure only 3 total events generated 09/21/22 11:14:59.412
W0921 11:29:59.642]     STEP: Make sure node condition "TestCondition" is set 09/21/22 11:15:04.412
W0921 11:29:59.642]     STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:15:04.418
W0921 11:29:59.642]     STEP: should change node condition for old log within lookback duration 09/21/22 11:15:09.418
W0921 11:29:59.642]     STEP: Inject 1 logs: "permanent error 1" 09/21/22 11:15:09.418
W0921 11:29:59.642]     STEP: Wait for 3 temp events generated 09/21/22 11:15:09.419
W0921 11:29:59.642]     STEP: Wait for 4 total events generated 09/21/22 11:15:09.428
W0921 11:29:59.643]     STEP: Make sure only 4 total events generated 09/21/22 11:15:10.454
W0921 11:29:59.643]     STEP: Make sure node condition "TestCondition" is set 09/21/22 11:15:15.454
W0921 11:29:59.643]     STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:15:15.457
W0921 11:29:59.643]     STEP: should generate event for new log 09/21/22 11:15:20.457
W0921 11:29:59.643]     STEP: Inject 3 logs: "temporary error" 09/21/22 11:15:20.457
W0921 11:29:59.643]     STEP: Wait for 6 temp events generated 09/21/22 11:15:20.458
W0921 11:29:59.644]     STEP: Wait for 7 total events generated 09/21/22 11:15:21.475
W0921 11:29:59.644]     STEP: Make sure only 7 total events generated 09/21/22 11:15:21.484
W0921 11:29:59.644]     STEP: Make sure node condition "TestCondition" is set 09/21/22 11:15:26.484
W0921 11:29:59.644]     STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:15:26.487
W0921 11:29:59.644]     STEP: should not update node condition with the same reason 09/21/22 11:15:31.487
W0921 11:29:59.644]     STEP: Inject 1 logs: "permanent error 1different message" 09/21/22 11:15:31.488
W0921 11:29:59.645]     STEP: Wait for 6 temp events generated 09/21/22 11:15:31.488
W0921 11:29:59.645]     STEP: Wait for 7 total events generated 09/21/22 11:15:31.497
W0921 11:29:59.645]     STEP: Make sure only 7 total events generated 09/21/22 11:15:31.503
W0921 11:29:59.645]     STEP: Make sure node condition "TestCondition" is set 09/21/22 11:15:36.503
W0921 11:29:59.645]     STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:15:36.506
W0921 11:29:59.645]     STEP: should change node condition for new log 09/21/22 11:15:41.506
W0921 11:29:59.646]     STEP: Inject 1 logs: "permanent error 2" 09/21/22 11:15:41.506
W0921 11:29:59.646]     STEP: Wait for 6 temp events generated 09/21/22 11:15:41.507
W0921 11:29:59.646]     STEP: Wait for 8 total events generated 09/21/22 11:15:41.515
W0921 11:29:59.646]     STEP: Make sure only 8 total events generated 09/21/22 11:15:42.534
W0921 11:29:59.646]     STEP: Make sure node condition "TestCondition" is set 09/21/22 11:15:47.534
W0921 11:29:59.646]     STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:15:47.536
W0921 11:29:59.646]     [AfterEach] SystemLogMonitor
... skipping 35 lines ...
W0921 11:29:59.652] 
W0921 11:29:59.652] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.653] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.653] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.653] 1 loaded units listed.
W0921 11:29:59.653] , kubelet-20220921T102832
W0921 11:29:59.653] W0921 11:15:52.880260    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:40208->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.653] STEP: Starting the kubelet 09/21/22 11:15:52.912
W0921 11:29:59.654] W0921 11:15:52.969134    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.654] Sep 21 11:15:57.999: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.654] Sep 21 11:15:59.001: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.654] Sep 21 11:16:00.004: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.655] Sep 21 11:16:01.007: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.655] Sep 21 11:16:02.009: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.655] Sep 21 11:16:03.012: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 110 lines ...
W0921 11:29:59.675] 
W0921 11:29:59.675] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.675] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.676] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.676] 1 loaded units listed.
W0921 11:29:59.676] , kubelet-20220921T102832
W0921 11:29:59.676] W0921 11:17:18.256946    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:37732->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.676] STEP: Starting the kubelet 09/21/22 11:17:18.27
W0921 11:29:59.676] W0921 11:17:18.328166    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.677] Sep 21 11:17:23.331: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.677] Sep 21 11:17:24.335: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.677] Sep 21 11:17:25.338: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.678] Sep 21 11:17:26.341: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.678] Sep 21 11:17:27.344: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.678] Sep 21 11:17:28.347: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 32 lines ...
W0921 11:29:59.684] 
W0921 11:29:59.684]     LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.684]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.684]     SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.684]     1 loaded units listed.
W0921 11:29:59.685]     , kubelet-20220921T102832
W0921 11:29:59.685]     W0921 11:15:52.880260    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:40208->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.685]     STEP: Starting the kubelet 09/21/22 11:15:52.912
W0921 11:29:59.685]     W0921 11:15:52.969134    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.686]     Sep 21 11:15:57.999: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.686]     Sep 21 11:15:59.001: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.686]     Sep 21 11:16:00.004: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.686]     Sep 21 11:16:01.007: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.687]     Sep 21 11:16:02.009: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.687]     Sep 21 11:16:03.012: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 110 lines ...
W0921 11:29:59.706] 
W0921 11:29:59.707]     LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.707]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.707]     SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.707]     1 loaded units listed.
W0921 11:29:59.707]     , kubelet-20220921T102832
W0921 11:29:59.707]     W0921 11:17:18.256946    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:37732->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.708]     STEP: Starting the kubelet 09/21/22 11:17:18.27
W0921 11:29:59.708]     W0921 11:17:18.328166    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.708]     Sep 21 11:17:23.331: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.708]     Sep 21 11:17:24.335: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.709]     Sep 21 11:17:25.338: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.709]     Sep 21 11:17:26.341: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.709]     Sep 21 11:17:27.344: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.710]     Sep 21 11:17:28.347: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 78 lines ...
W0921 11:29:59.724] 
W0921 11:29:59.724] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.724] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.724] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.725] 1 loaded units listed.
W0921 11:29:59.725] , kubelet-20220921T102832
W0921 11:29:59.725] W0921 11:17:33.643962    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:40892->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.725] STEP: Starting the kubelet 09/21/22 11:17:33.655
W0921 11:29:59.726] W0921 11:17:33.706753    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.726] Sep 21 11:17:38.711: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.726] Sep 21 11:17:39.714: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.727] Sep 21 11:17:40.717: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.727] Sep 21 11:17:41.720: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.727] Sep 21 11:17:42.723: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.728] Sep 21 11:17:43.731: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 77 lines ...
W0921 11:29:59.766] 
W0921 11:29:59.766] LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.766] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.767] SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.767] 1 loaded units listed.
W0921 11:29:59.767] , kubelet-20220921T102832
W0921 11:29:59.767] W0921 11:18:12.035027    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:55544->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.767] STEP: Starting the kubelet 09/21/22 11:18:12.048
W0921 11:29:59.768] W0921 11:18:12.106008    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.768] Sep 21 11:18:17.109: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.768] Sep 21 11:18:18.112: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.768] Sep 21 11:18:19.114: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.769] Sep 21 11:18:20.117: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.769] Sep 21 11:18:21.120: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.769] Sep 21 11:18:22.123: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 29 lines ...
W0921 11:29:59.775] 
W0921 11:29:59.775]     LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.775]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.775]     SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.775]     1 loaded units listed.
W0921 11:29:59.775]     , kubelet-20220921T102832
W0921 11:29:59.776]     W0921 11:17:33.643962    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:40892->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.776]     STEP: Starting the kubelet 09/21/22 11:17:33.655
W0921 11:29:59.776]     W0921 11:17:33.706753    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.776]     Sep 21 11:17:38.711: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.777]     Sep 21 11:17:39.714: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.777]     Sep 21 11:17:40.717: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.777]     Sep 21 11:17:41.720: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.777]     Sep 21 11:17:42.723: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.778]     Sep 21 11:17:43.731: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 77 lines ...
W0921 11:29:59.816] 
W0921 11:29:59.816]     LOAD   = Reflects whether the unit definition was properly loaded.
W0921 11:29:59.817]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0921 11:29:59.817]     SUB    = The low-level unit activation state, values depend on unit type.
W0921 11:29:59.817]     1 loaded units listed.
W0921 11:29:59.817]     , kubelet-20220921T102832
W0921 11:29:59.817]     W0921 11:18:12.035027    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:55544->127.0.0.1:10248: read: connection reset by peer
W0921 11:29:59.817]     STEP: Starting the kubelet 09/21/22 11:18:12.048
W0921 11:29:59.818]     W0921 11:18:12.106008    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0921 11:29:59.818]     Sep 21 11:18:17.109: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.818]     Sep 21 11:18:18.112: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.819]     Sep 21 11:18:19.114: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.819]     Sep 21 11:18:20.117: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.819]     Sep 21 11:18:21.120: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
W0921 11:29:59.820]     Sep 21 11:18:22.123: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 12494 lines ...
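Editor's note: the repeated `Health check on "http://127.0.0.1:10248/healthz" failed` lines above come from the e2e harness polling the kubelet's healthz endpoint after each restart until it answers. A minimal self-contained sketch of that retry loop follows; the probe is stubbed out (the stub and the "healthy on the 3rd attempt" behavior are assumptions for illustration — the real harness issues an HTTP HEAD against `http://127.0.0.1:10248/healthz`, see `util.go:403`).

```shell
#!/bin/sh
# Sketch of the kubelet health-check loop seen in the log above: keep probing
# until the endpoint responds. The probe is stubbed so this runs standalone.
attempts=0
probe() {
  # Stub: pretend the kubelet becomes healthy on the 3rd attempt.
  [ "$attempts" -ge 3 ]
}
until probe; do
  attempts=$((attempts + 1))
  echo "Health check failed (attempt $attempts)"
  sleep 0  # the real harness waits roughly a second between probes
done
echo "kubelet healthy after $attempts attempts"
```

Once the endpoint responds, the harness moves on to waiting for the node's Ready condition, which is why the `KubeletNotReady` lines follow each successful restart.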
I0921 11:30:20.326] 
I0921 11:30:20.326] Sep 21 10:54:37.361: INFO: Dumping perf data for test "resource_10" to "/tmp/node-e2e-20220921T102832/results/performance-memory-fedora-resource_10.json".
I0921 11:30:20.326] Sep 21 10:54:37.362: INFO: Dumping perf data for test "resource_10" to "/tmp/node-e2e-20220921T102832/results/performance-cpu-fedora-resource_10.json".
I0921 11:30:20.326] [AfterEach] [sig-node] Resource-usage [Serial] [Slow]
I0921 11:30:20.326]   test/e2e_node/resource_usage_test.go:62
I0921 11:30:20.326] W0921 10:54:37.363535    2625 metrics_grabber.go:110] Can't find any pods in namespace kube-system to grab metrics from
I0921 11:30:20.327] Sep 21 10:54:37.382: INFO: runtime operation error metrics:
I0921 11:30:20.327] node "n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d" runtime operation error rate:
I0921 11:30:20.327] 
I0921 11:30:20.327] 
I0921 11:30:20.327] [DeferCleanup] [sig-node] Resource-usage [Serial] [Slow]
I0921 11:30:20.327]   dump namespaces | framework.go:173
I0921 11:30:20.327] [DeferCleanup] [sig-node] Resource-usage [Serial] [Slow]
I0921 11:30:20.327]   tear down framework | framework.go:170
... skipping 209 lines ...
I0921 11:30:20.372] 
I0921 11:30:20.372]     Sep 21 10:54:37.361: INFO: Dumping perf data for test "resource_10" to "/tmp/node-e2e-20220921T102832/results/performance-memory-fedora-resource_10.json".
I0921 11:30:20.372]     Sep 21 10:54:37.362: INFO: Dumping perf data for test "resource_10" to "/tmp/node-e2e-20220921T102832/results/performance-cpu-fedora-resource_10.json".
I0921 11:30:20.372]     [AfterEach] [sig-node] Resource-usage [Serial] [Slow]
I0921 11:30:20.372]       test/e2e_node/resource_usage_test.go:62
I0921 11:30:20.373]     W0921 10:54:37.363535    2625 metrics_grabber.go:110] Can't find any pods in namespace kube-system to grab metrics from
I0921 11:30:20.373]     Sep 21 10:54:37.382: INFO: runtime operation error metrics:
I0921 11:30:20.373]     node "n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d" runtime operation error rate:
I0921 11:30:20.373] 
I0921 11:30:20.373] 
I0921 11:30:20.373]     [DeferCleanup] [sig-node] Resource-usage [Serial] [Slow]
I0921 11:30:20.373]       dump namespaces | framework.go:173
I0921 11:30:20.374]     [DeferCleanup] [sig-node] Resource-usage [Serial] [Slow]
I0921 11:30:20.374]       tear down framework | framework.go:170
... skipping 635 lines ...
I0921 11:30:20.499] 
I0921 11:30:20.499] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:20.500] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:20.500] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:20.500] 1 loaded units listed.
I0921 11:30:20.500] , kubelet-20220921T102832
I0921 11:30:20.500] W0921 10:57:09.645173    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:20.500] STEP: Starting the kubelet 09/21/22 10:57:09.651
I0921 11:30:20.501] W0921 10:57:09.686129    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:20.501] Sep 21 10:57:14.689: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.501] Sep 21 10:57:15.692: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.502] Sep 21 10:57:16.695: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.502] Sep 21 10:57:17.698: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.502] Sep 21 10:57:18.701: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.503] Sep 21 10:57:19.704: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 31 lines ...
I0921 11:30:20.508] 
I0921 11:30:20.508]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:20.508]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:20.509]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:20.509]     1 loaded units listed.
I0921 11:30:20.509]     , kubelet-20220921T102832
I0921 11:30:20.509]     W0921 10:57:09.645173    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:20.509]     STEP: Starting the kubelet 09/21/22 10:57:09.651
I0921 11:30:20.510]     W0921 10:57:09.686129    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:20.510]     Sep 21 10:57:14.689: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.510]     Sep 21 10:57:15.692: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.511]     Sep 21 10:57:16.695: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.511]     Sep 21 10:57:17.698: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.511]     Sep 21 10:57:18.701: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.512]     Sep 21 10:57:19.704: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 18 lines ...
I0921 11:30:20.515] STEP: Creating a kubernetes client 09/21/22 10:57:20.711
I0921 11:30:20.515] STEP: Building a namespace api object, basename downward-api 09/21/22 10:57:20.711
I0921 11:30:20.515] Sep 21 10:57:20.715: INFO: Skipping waiting for service account
I0921 11:30:20.515] [It] should provide container's limits.hugepages-<pagesize> and requests.hugepages-<pagesize> as env vars
I0921 11:30:20.515]   test/e2e/common/node/downwardapi.go:293
I0921 11:30:20.516] STEP: Creating a pod to test downward api env vars 09/21/22 10:57:20.715
I0921 11:30:20.516] Sep 21 10:57:20.724: INFO: Waiting up to 5m0s for pod "downward-api-0ab6a0fe-4099-4280-9fc4-6eeddbaebad2" in namespace "downward-api-111" to be "Succeeded or Failed"
I0921 11:30:20.516] Sep 21 10:57:20.730: INFO: Pod "downward-api-0ab6a0fe-4099-4280-9fc4-6eeddbaebad2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.79054ms
I0921 11:30:20.516] Sep 21 10:57:22.733: INFO: Pod "downward-api-0ab6a0fe-4099-4280-9fc4-6eeddbaebad2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008697164s
I0921 11:30:20.516] Sep 21 10:57:24.733: INFO: Pod "downward-api-0ab6a0fe-4099-4280-9fc4-6eeddbaebad2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008304233s
I0921 11:30:20.517] Sep 21 10:57:26.733: INFO: Pod "downward-api-0ab6a0fe-4099-4280-9fc4-6eeddbaebad2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.008236988s
I0921 11:30:20.517] STEP: Saw pod success 09/21/22 10:57:26.733
I0921 11:30:20.517] Sep 21 10:57:26.733: INFO: Pod "downward-api-0ab6a0fe-4099-4280-9fc4-6eeddbaebad2" satisfied condition "Succeeded or Failed"
I0921 11:30:20.517] Sep 21 10:57:26.734: INFO: Trying to get logs from node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d pod downward-api-0ab6a0fe-4099-4280-9fc4-6eeddbaebad2 container dapi-container: <nil>
I0921 11:30:20.518] STEP: delete the pod 09/21/22 10:57:26.743
I0921 11:30:20.518] Sep 21 10:57:26.746: INFO: Waiting for pod downward-api-0ab6a0fe-4099-4280-9fc4-6eeddbaebad2 to disappear
I0921 11:30:20.518] Sep 21 10:57:26.747: INFO: Pod downward-api-0ab6a0fe-4099-4280-9fc4-6eeddbaebad2 no longer exists
I0921 11:30:20.518] [DeferCleanup] [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages]
I0921 11:30:20.518]   dump namespaces | framework.go:173
... skipping 16 lines ...
I0921 11:30:20.521]     STEP: Creating a kubernetes client 09/21/22 10:57:20.711
I0921 11:30:20.521]     STEP: Building a namespace api object, basename downward-api 09/21/22 10:57:20.711
I0921 11:30:20.521]     Sep 21 10:57:20.715: INFO: Skipping waiting for service account
I0921 11:30:20.521]     [It] should provide container's limits.hugepages-<pagesize> and requests.hugepages-<pagesize> as env vars
I0921 11:30:20.521]       test/e2e/common/node/downwardapi.go:293
I0921 11:30:20.522]     STEP: Creating a pod to test downward api env vars 09/21/22 10:57:20.715
I0921 11:30:20.522]     Sep 21 10:57:20.724: INFO: Waiting up to 5m0s for pod "downward-api-0ab6a0fe-4099-4280-9fc4-6eeddbaebad2" in namespace "downward-api-111" to be "Succeeded or Failed"
I0921 11:30:20.522]     Sep 21 10:57:20.730: INFO: Pod "downward-api-0ab6a0fe-4099-4280-9fc4-6eeddbaebad2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.79054ms
I0921 11:30:20.522]     Sep 21 10:57:22.733: INFO: Pod "downward-api-0ab6a0fe-4099-4280-9fc4-6eeddbaebad2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008697164s
I0921 11:30:20.523]     Sep 21 10:57:24.733: INFO: Pod "downward-api-0ab6a0fe-4099-4280-9fc4-6eeddbaebad2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008304233s
I0921 11:30:20.523]     Sep 21 10:57:26.733: INFO: Pod "downward-api-0ab6a0fe-4099-4280-9fc4-6eeddbaebad2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.008236988s
I0921 11:30:20.523]     STEP: Saw pod success 09/21/22 10:57:26.733
I0921 11:30:20.523]     Sep 21 10:57:26.733: INFO: Pod "downward-api-0ab6a0fe-4099-4280-9fc4-6eeddbaebad2" satisfied condition "Succeeded or Failed"
I0921 11:30:20.524]     Sep 21 10:57:26.734: INFO: Trying to get logs from node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d pod downward-api-0ab6a0fe-4099-4280-9fc4-6eeddbaebad2 container dapi-container: <nil>
I0921 11:30:20.524]     STEP: delete the pod 09/21/22 10:57:26.743
I0921 11:30:20.524]     Sep 21 10:57:26.746: INFO: Waiting for pod downward-api-0ab6a0fe-4099-4280-9fc4-6eeddbaebad2 to disappear
I0921 11:30:20.524]     Sep 21 10:57:26.747: INFO: Pod downward-api-0ab6a0fe-4099-4280-9fc4-6eeddbaebad2 no longer exists
I0921 11:30:20.524]     [DeferCleanup] [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages]
I0921 11:30:20.524]       dump namespaces | framework.go:173
... skipping 216 lines ...
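Editor's note: the downward-API test above verifies that `limits.hugepages-<pagesize>` and `requests.hugepages-<pagesize>` can be exposed to a container as environment variables via `resourceFieldRef`. A minimal manifest sketch of that mechanism follows (pod name, image, and the 2Mi page size are illustrative assumptions, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hugepages-downward-api   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: registry.k8s.io/busybox
    command: ["sh", "-c", "env | grep HUGEPAGES"]
    resources:
      limits:
        hugepages-2Mi: 2Mi
        memory: 64Mi
        cpu: 100m
      requests:
        hugepages-2Mi: 2Mi
    env:
    - name: HUGEPAGES_LIMIT        # populated by the kubelet from the limit above
      valueFrom:
        resourceFieldRef:
          resource: limits.hugepages-2Mi
```

The pod runs to completion and the test asserts the expected values appear in its environment, which matches the `Succeeded or Failed` wait loop in the log.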
I0921 11:30:20.563] 
I0921 11:30:20.563] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:20.564] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:20.564] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:20.564] 1 loaded units listed.
I0921 11:30:20.564] , kubelet-20220921T102832
I0921 11:30:20.564] W0921 10:58:04.911654    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:34284->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:20.564] STEP: Starting the kubelet 09/21/22 10:58:04.918
I0921 11:30:20.565] W0921 10:58:04.953863    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:20.565] Sep 21 10:58:09.957: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.565] Sep 21 10:58:10.960: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.565] Sep 21 10:58:11.962: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.566] Sep 21 10:58:12.966: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.566] Sep 21 10:58:13.969: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.566] Sep 21 10:58:14.971: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.566] [It] should set pids.max for Pod
I0921 11:30:20.566]   test/e2e_node/pids_test.go:90
I0921 11:30:20.566] STEP: by creating a G pod 09/21/22 10:58:15.974
I0921 11:30:20.567] STEP: checking if the expected pids settings were applied 09/21/22 10:58:15.983
I0921 11:30:20.567] Sep 21 10:58:15.983: INFO: Pod to run command: expected=1024; actual=$(cat /tmp/pids//kubepods.slice/kubepods-pod8956d093_cc6b_4a26_ac39_e5b4ec75abd9.slice/pids.max); if [ "$expected" -ne "$actual" ]; then exit 1; fi; 
I0921 11:30:20.567] Sep 21 10:58:15.990: INFO: Waiting up to 5m0s for pod "pod67a20102-923e-4072-b567-94158fdd8549" in namespace "pids-limit-test-4001" to be "Succeeded or Failed"
I0921 11:30:20.567] Sep 21 10:58:15.996: INFO: Pod "pod67a20102-923e-4072-b567-94158fdd8549": Phase="Pending", Reason="", readiness=false. Elapsed: 5.673992ms
I0921 11:30:20.568] Sep 21 10:58:18.004: INFO: Pod "pod67a20102-923e-4072-b567-94158fdd8549": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013958674s
I0921 11:30:20.568] Sep 21 10:58:19.999: INFO: Pod "pod67a20102-923e-4072-b567-94158fdd8549": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009319846s
I0921 11:30:20.568] Sep 21 10:58:21.998: INFO: Pod "pod67a20102-923e-4072-b567-94158fdd8549": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.008089329s
I0921 11:30:20.568] STEP: Saw pod success 09/21/22 10:58:21.998
I0921 11:30:20.568] Sep 21 10:58:21.998: INFO: Pod "pod67a20102-923e-4072-b567-94158fdd8549" satisfied condition "Succeeded or Failed"
I0921 11:30:20.568] [AfterEach] With config updated with pids limits
I0921 11:30:20.569]   test/e2e_node/util.go:181
I0921 11:30:20.569] STEP: Stopping the kubelet 09/21/22 10:58:21.998
I0921 11:30:20.569] Sep 21 10:58:22.033: INFO: Get running kubelet with systemctl:   UNIT                            LOAD   ACTIVE SUB     DESCRIPTION
I0921 11:30:20.570]   kubelet-20220921T102832.service loaded active running /tmp/node-e2e-20220921T102832/kubelet --kubeconfig /tmp/node-e2e-20220921T102832/kubeconfig --root-dir /var/lib/kubelet --v 4 --feature-gates LocalStorageCapacityIsolation=true --hostname-override n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20220921T102832/kubelet-config --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service
I0921 11:30:20.570] 
I0921 11:30:20.570] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:20.570] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:20.570] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:20.570] 1 loaded units listed.
I0921 11:30:20.570] , kubelet-20220921T102832
I0921 11:30:20.571] W0921 10:58:22.097319    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:56886->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:20.571] STEP: Starting the kubelet 09/21/22 10:58:22.106
I0921 11:30:20.571] W0921 10:58:22.144390    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:20.571] Sep 21 10:58:27.147: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.572] Sep 21 10:58:28.150: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.572] Sep 21 10:58:29.153: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.572] Sep 21 10:58:30.155: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.572] Sep 21 10:58:31.158: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.573] Sep 21 10:58:32.162: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 26 lines ...
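Editor's note: the `should set pids.max for Pod` test above runs a one-liner inside the pod comparing the configured limit (1024) against the value in the pod's cgroup `pids.max` file. A self-contained sketch of that comparison follows; a temp file stands in for the real cgroup path (`/tmp/pids/.../kubepods-pod<uid>.slice/pids.max`), which is an assumption made so the sketch runs anywhere.

```shell
#!/bin/sh
# Sketch of the pids.max verification from the log above. The real test reads
# the pod's cgroup pids.max file; here a temp file stands in for it.
expected=1024
cgroup_file=$(mktemp)            # stand-in for .../kubepods-pod<uid>.slice/pids.max
echo 1024 > "$cgroup_file"
actual=$(cat "$cgroup_file")
if [ "$expected" -ne "$actual" ]; then
  echo "pids.max mismatch: want $expected, got $actual"
  exit 1
fi
echo "pids.max OK: $actual"
rm -f "$cgroup_file"
```

The pod exits 0 only when the values match, so the harness's `Succeeded or Failed` wait doubles as the assertion.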
I0921 11:30:20.577] 
I0921 11:30:20.577]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:20.577]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:20.577]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:20.578]     1 loaded units listed.
I0921 11:30:20.578]     , kubelet-20220921T102832
I0921 11:30:20.578]     W0921 10:58:04.911654    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:34284->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:20.578]     STEP: Starting the kubelet 09/21/22 10:58:04.918
I0921 11:30:20.578]     W0921 10:58:04.953863    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:20.579]     Sep 21 10:58:09.957: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.579]     Sep 21 10:58:10.960: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.579]     Sep 21 10:58:11.962: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.580]     Sep 21 10:58:12.966: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.580]     Sep 21 10:58:13.969: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.580]     Sep 21 10:58:14.971: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.580]     [It] should set pids.max for Pod
I0921 11:30:20.580]       test/e2e_node/pids_test.go:90
I0921 11:30:20.581]     STEP: by creating a G pod 09/21/22 10:58:15.974
I0921 11:30:20.581]     STEP: checking if the expected pids settings were applied 09/21/22 10:58:15.983
I0921 11:30:20.581]     Sep 21 10:58:15.983: INFO: Pod to run command: expected=1024; actual=$(cat /tmp/pids//kubepods.slice/kubepods-pod8956d093_cc6b_4a26_ac39_e5b4ec75abd9.slice/pids.max); if [ "$expected" -ne "$actual" ]; then exit 1; fi; 
I0921 11:30:20.581]     Sep 21 10:58:15.990: INFO: Waiting up to 5m0s for pod "pod67a20102-923e-4072-b567-94158fdd8549" in namespace "pids-limit-test-4001" to be "Succeeded or Failed"
I0921 11:30:20.581]     Sep 21 10:58:15.996: INFO: Pod "pod67a20102-923e-4072-b567-94158fdd8549": Phase="Pending", Reason="", readiness=false. Elapsed: 5.673992ms
I0921 11:30:20.582]     Sep 21 10:58:18.004: INFO: Pod "pod67a20102-923e-4072-b567-94158fdd8549": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013958674s
I0921 11:30:20.582]     Sep 21 10:58:19.999: INFO: Pod "pod67a20102-923e-4072-b567-94158fdd8549": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009319846s
I0921 11:30:20.582]     Sep 21 10:58:21.998: INFO: Pod "pod67a20102-923e-4072-b567-94158fdd8549": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.008089329s
I0921 11:30:20.582]     STEP: Saw pod success 09/21/22 10:58:21.998
I0921 11:30:20.583]     Sep 21 10:58:21.998: INFO: Pod "pod67a20102-923e-4072-b567-94158fdd8549" satisfied condition "Succeeded or Failed"
I0921 11:30:20.583]     [AfterEach] With config updated with pids limits
I0921 11:30:20.583]       test/e2e_node/util.go:181
I0921 11:30:20.583]     STEP: Stopping the kubelet 09/21/22 10:58:21.998
I0921 11:30:20.583]     Sep 21 10:58:22.033: INFO: Get running kubelet with systemctl:   UNIT                            LOAD   ACTIVE SUB     DESCRIPTION
I0921 11:30:20.584]       kubelet-20220921T102832.service loaded active running /tmp/node-e2e-20220921T102832/kubelet --kubeconfig /tmp/node-e2e-20220921T102832/kubeconfig --root-dir /var/lib/kubelet --v 4 --feature-gates LocalStorageCapacityIsolation=true --hostname-override n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20220921T102832/kubelet-config --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service
I0921 11:30:20.584] 
I0921 11:30:20.584]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:20.584]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:20.584]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:20.584]     1 loaded units listed.
I0921 11:30:20.585]     , kubelet-20220921T102832
I0921 11:30:20.585]     W0921 10:58:22.097319    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:56886->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:20.585]     STEP: Starting the kubelet 09/21/22 10:58:22.106
I0921 11:30:20.585]     W0921 10:58:22.144390    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:20.586]     Sep 21 10:58:27.147: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.586]     Sep 21 10:58:28.150: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.586]     Sep 21 10:58:29.153: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.586]     Sep 21 10:58:30.155: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.587]     Sep 21 10:58:31.158: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.587]     Sep 21 10:58:32.162: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 79 lines ...
I0921 11:30:20.602] 
I0921 11:30:20.602] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:20.602] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:20.602] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:20.602] 1 loaded units listed.
I0921 11:30:20.602] , kubelet-20220921T102832
I0921 11:30:20.603] W0921 10:58:33.305530    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:60742->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:20.603] STEP: Starting the kubelet 09/21/22 10:58:33.314
I0921 11:30:20.603] W0921 10:58:33.347685    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:20.604] Sep 21 10:58:38.354: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.604] Sep 21 10:58:39.356: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.604] Sep 21 10:58:40.359: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.605] Sep 21 10:58:41.361: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.605] Sep 21 10:58:42.364: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.605] Sep 21 10:58:43.367: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
... skipping 24 lines ...
I0921 11:30:20.611] STEP: Waiting for evictions to occur 09/21/22 10:59:18.446
I0921 11:30:20.611] Sep 21 10:59:18.459: INFO: Kubelet Metrics: []
I0921 11:30:20.611] Sep 21 10:59:18.469: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 15014227968
I0921 11:30:20.611] Sep 21 10:59:18.469: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 15014227968
I0921 11:30:20.611] Sep 21 10:59:18.471: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.612] Sep 21 10:59:18.471: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.612] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:18.472
I0921 11:30:20.612] Sep 21 10:59:20.485: INFO: Kubelet Metrics: []
I0921 11:30:20.612] Sep 21 10:59:20.496: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 15014227968
I0921 11:30:20.612] Sep 21 10:59:20.496: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 15014227968
I0921 11:30:20.613] Sep 21 10:59:20.498: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.613] Sep 21 10:59:20.498: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.613] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:20.498
I0921 11:30:20.613] Sep 21 10:59:22.523: INFO: Kubelet Metrics: []
I0921 11:30:20.613] Sep 21 10:59:22.535: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 15014227968
I0921 11:30:20.613] Sep 21 10:59:22.535: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 15014227968
I0921 11:30:20.614] Sep 21 10:59:22.537: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.614] Sep 21 10:59:22.537: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.614] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:22.537
I0921 11:30:20.614] Sep 21 10:59:24.548: INFO: Kubelet Metrics: []
I0921 11:30:20.614] Sep 21 10:59:24.559: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811463680
I0921 11:30:20.615] Sep 21 10:59:24.560: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811463680
I0921 11:30:20.615] Sep 21 10:59:24.562: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.615] Sep 21 10:59:24.562: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.615] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:24.562
I0921 11:30:20.615] Sep 21 10:59:26.574: INFO: Kubelet Metrics: []
I0921 11:30:20.616] Sep 21 10:59:26.592: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811463680
I0921 11:30:20.616] Sep 21 10:59:26.592: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811463680
I0921 11:30:20.616] Sep 21 10:59:26.595: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.616] Sep 21 10:59:26.595: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.616] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:26.595
I0921 11:30:20.616] Sep 21 10:59:28.606: INFO: Kubelet Metrics: []
I0921 11:30:20.617] Sep 21 10:59:28.619: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811463680
I0921 11:30:20.617] Sep 21 10:59:28.619: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811463680
I0921 11:30:20.617] Sep 21 10:59:28.619: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.617] Sep 21 10:59:28.619: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.617] Sep 21 10:59:28.619: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.618] Sep 21 10:59:28.621: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.618] Sep 21 10:59:28.621: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.618] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:28.621
I0921 11:30:20.618] Sep 21 10:59:30.635: INFO: Kubelet Metrics: []
I0921 11:30:20.618] Sep 21 10:59:30.661: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811463680
I0921 11:30:20.619] Sep 21 10:59:30.661: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811463680
I0921 11:30:20.619] Sep 21 10:59:30.661: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.619] Sep 21 10:59:30.661: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.619] Sep 21 10:59:30.661: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.619] Sep 21 10:59:30.664: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.619] Sep 21 10:59:30.664: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.620] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:30.664
I0921 11:30:20.620] Sep 21 10:59:32.677: INFO: Kubelet Metrics: []
I0921 11:30:20.620] Sep 21 10:59:32.690: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811463680
I0921 11:30:20.620] Sep 21 10:59:32.690: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811463680
I0921 11:30:20.620] Sep 21 10:59:32.690: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.621] Sep 21 10:59:32.690: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.621] Sep 21 10:59:32.690: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.621] Sep 21 10:59:32.692: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.621] Sep 21 10:59:32.692: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.621] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:32.692
I0921 11:30:20.621] Sep 21 10:59:34.704: INFO: Kubelet Metrics: []
I0921 11:30:20.622] Sep 21 10:59:34.717: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.622] Sep 21 10:59:34.717: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.622] Sep 21 10:59:34.717: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.622] Sep 21 10:59:34.717: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.622] Sep 21 10:59:34.717: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.623] Sep 21 10:59:34.717: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.623] Sep 21 10:59:34.717: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.623] Sep 21 10:59:34.720: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.623] Sep 21 10:59:34.720: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.623] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:34.72
I0921 11:30:20.623] Sep 21 10:59:36.738: INFO: Kubelet Metrics: []
I0921 11:30:20.624] Sep 21 10:59:36.756: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.624] Sep 21 10:59:36.756: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.624] Sep 21 10:59:36.756: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.624] Sep 21 10:59:36.756: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.624] Sep 21 10:59:36.756: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.625] Sep 21 10:59:36.756: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.625] Sep 21 10:59:36.756: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.625] Sep 21 10:59:36.759: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.625] Sep 21 10:59:36.760: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.625] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:36.76
I0921 11:30:20.625] Sep 21 10:59:38.774: INFO: Kubelet Metrics: []
I0921 11:30:20.626] Sep 21 10:59:38.798: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.626] Sep 21 10:59:38.798: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.626] Sep 21 10:59:38.798: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.626] Sep 21 10:59:38.798: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.626] Sep 21 10:59:38.798: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.627] Sep 21 10:59:38.798: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.627] Sep 21 10:59:38.798: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.627] Sep 21 10:59:38.800: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.627] Sep 21 10:59:38.800: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.627] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:38.8
I0921 11:30:20.627] Sep 21 10:59:40.811: INFO: Kubelet Metrics: []
I0921 11:30:20.628] Sep 21 10:59:40.823: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.628] Sep 21 10:59:40.823: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.628] Sep 21 10:59:40.823: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.628] Sep 21 10:59:40.823: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.628] Sep 21 10:59:40.823: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.628] Sep 21 10:59:40.823: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.629] Sep 21 10:59:40.823: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.629] Sep 21 10:59:40.825: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.629] Sep 21 10:59:40.825: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.629] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:40.826
I0921 11:30:20.629] Sep 21 10:59:42.839: INFO: Kubelet Metrics: []
I0921 11:30:20.630] Sep 21 10:59:42.852: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.630] Sep 21 10:59:42.852: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.630] Sep 21 10:59:42.852: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.630] Sep 21 10:59:42.852: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.630] Sep 21 10:59:42.852: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.630] Sep 21 10:59:42.852: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.631] Sep 21 10:59:42.852: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.631] Sep 21 10:59:42.854: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.631] Sep 21 10:59:42.854: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.631] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:42.854
I0921 11:30:20.631] Sep 21 10:59:44.867: INFO: Kubelet Metrics: []
I0921 11:30:20.632] Sep 21 10:59:44.878: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.632] Sep 21 10:59:44.878: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.632] Sep 21 10:59:44.878: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.632] Sep 21 10:59:44.878: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.632] Sep 21 10:59:44.878: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.633] Sep 21 10:59:44.878: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.633] Sep 21 10:59:44.878: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.633] Sep 21 10:59:44.880: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.633] Sep 21 10:59:44.880: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.633] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:44.88
I0921 11:30:20.633] Sep 21 10:59:46.892: INFO: Kubelet Metrics: []
I0921 11:30:20.634] Sep 21 10:59:46.910: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.634] Sep 21 10:59:46.910: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.634] Sep 21 10:59:46.910: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.634] Sep 21 10:59:46.910: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.634] Sep 21 10:59:46.910: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.635] Sep 21 10:59:46.910: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.635] Sep 21 10:59:46.910: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.635] Sep 21 10:59:46.913: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.635] Sep 21 10:59:46.913: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.635] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:46.913
I0921 11:30:20.636] Sep 21 10:59:48.925: INFO: Kubelet Metrics: []
I0921 11:30:20.636] Sep 21 10:59:48.938: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.636] Sep 21 10:59:48.939: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.636] Sep 21 10:59:48.939: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.636] Sep 21 10:59:48.939: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.636] Sep 21 10:59:48.939: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.637] Sep 21 10:59:48.939: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.637] Sep 21 10:59:48.939: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.637] Sep 21 10:59:48.941: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.637] Sep 21 10:59:48.941: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.637] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:48.941
I0921 11:30:20.637] Sep 21 10:59:50.959: INFO: Kubelet Metrics: []
I0921 11:30:20.638] Sep 21 10:59:50.976: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.638] Sep 21 10:59:50.976: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.638] Sep 21 10:59:50.976: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.638] Sep 21 10:59:50.976: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.638] Sep 21 10:59:50.976: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.638] Sep 21 10:59:50.976: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.638] Sep 21 10:59:50.976: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.639] Sep 21 10:59:50.979: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.639] Sep 21 10:59:50.979: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.639] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:50.979
I0921 11:30:20.639] Sep 21 10:59:52.991: INFO: Kubelet Metrics: []
I0921 11:30:20.639] Sep 21 10:59:53.003: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.640] Sep 21 10:59:53.003: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.640] Sep 21 10:59:53.003: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.640] Sep 21 10:59:53.003: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.640] Sep 21 10:59:53.003: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.640] Sep 21 10:59:53.003: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.640] Sep 21 10:59:53.003: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.641] Sep 21 10:59:53.005: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.641] Sep 21 10:59:53.005: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.641] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:53.005
I0921 11:30:20.641] Sep 21 10:59:55.019: INFO: Kubelet Metrics: []
I0921 11:30:20.641] Sep 21 10:59:55.031: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.641] Sep 21 10:59:55.031: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.641] Sep 21 10:59:55.031: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.642] Sep 21 10:59:55.031: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.642] Sep 21 10:59:55.031: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.642] Sep 21 10:59:55.031: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.642] Sep 21 10:59:55.031: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.642] Sep 21 10:59:55.033: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.642] Sep 21 10:59:55.033: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.643] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:55.033
I0921 11:30:20.643] Sep 21 10:59:57.056: INFO: Kubelet Metrics: []
I0921 11:30:20.643] Sep 21 10:59:57.073: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.643] Sep 21 10:59:57.074: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.643] Sep 21 10:59:57.074: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.644] Sep 21 10:59:57.074: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.644] Sep 21 10:59:57.074: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.644] Sep 21 10:59:57.074: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.644] Sep 21 10:59:57.074: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.644] Sep 21 10:59:57.076: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.644] Sep 21 10:59:57.076: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.645] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:57.076
I0921 11:30:20.645] Sep 21 10:59:59.087: INFO: Kubelet Metrics: []
I0921 11:30:20.645] Sep 21 10:59:59.098: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.645] Sep 21 10:59:59.098: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.645] Sep 21 10:59:59.098: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.646] Sep 21 10:59:59.098: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.646] Sep 21 10:59:59.098: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.646] Sep 21 10:59:59.098: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.646] Sep 21 10:59:59.098: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.646] Sep 21 10:59:59.101: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.646] Sep 21 10:59:59.101: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.647] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:59.101
I0921 11:30:20.647] Sep 21 11:00:01.114: INFO: Kubelet Metrics: []
I0921 11:30:20.647] Sep 21 11:00:01.126: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.647] Sep 21 11:00:01.126: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.647] Sep 21 11:00:01.126: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.648] Sep 21 11:00:01.126: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.648] Sep 21 11:00:01.126: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.648] Sep 21 11:00:01.126: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.648] Sep 21 11:00:01.126: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.648] Sep 21 11:00:01.129: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.649] Sep 21 11:00:01.129: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.649] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:01.129
I0921 11:30:20.649] Sep 21 11:00:03.143: INFO: Kubelet Metrics: []
I0921 11:30:20.649] Sep 21 11:00:03.155: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.649] Sep 21 11:00:03.155: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.649] Sep 21 11:00:03.155: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.650] Sep 21 11:00:03.155: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.650] Sep 21 11:00:03.155: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.650] Sep 21 11:00:03.155: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.650] Sep 21 11:00:03.155: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.650] Sep 21 11:00:03.157: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.651] Sep 21 11:00:03.157: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.651] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:03.157
I0921 11:30:20.651] Sep 21 11:00:05.175: INFO: Kubelet Metrics: []
I0921 11:30:20.651] Sep 21 11:00:05.194: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.651] Sep 21 11:00:05.194: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.651] Sep 21 11:00:05.194: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.652] Sep 21 11:00:05.194: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.652] Sep 21 11:00:05.194: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.652] Sep 21 11:00:05.194: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.652] Sep 21 11:00:05.194: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.652] Sep 21 11:00:05.197: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.653] Sep 21 11:00:05.197: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.653] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:05.197
I0921 11:30:20.653] Sep 21 11:00:07.218: INFO: Kubelet Metrics: []
I0921 11:30:20.653] Sep 21 11:00:07.238: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947278848
I0921 11:30:20.653] Sep 21 11:00:07.238: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947278848
I0921 11:30:20.653] Sep 21 11:00:07.238: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.654] Sep 21 11:00:07.238: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.654] Sep 21 11:00:07.238: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.654] Sep 21 11:00:07.241: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
I0921 11:30:20.654] Sep 21 11:00:07.241: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.654] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:07.241
I0921 11:30:20.655] STEP: making sure pressure from test has surfaced before continuing 09/21/22 11:00:07.241
I0921 11:30:20.655] STEP: Waiting for NodeCondition: NoPressure to no longer exist on the node 09/21/22 11:00:27.241
I0921 11:30:20.655] Sep 21 11:00:27.253: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947278848
I0921 11:30:20.655] Sep 21 11:00:27.253: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947278848
I0921 11:30:20.655] Sep 21 11:00:27.253: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.656] Sep 21 11:00:27.253: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
... skipping 3 lines ...
I0921 11:30:20.656] Sep 21 11:00:27.276: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947278848
I0921 11:30:20.657] Sep 21 11:00:27.276: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947278848
I0921 11:30:20.657] Sep 21 11:00:27.276: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.657] Sep 21 11:00:27.276: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.657] Sep 21 11:00:27.276: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.657] Sep 21 11:00:27.286: INFO: Kubelet Metrics: []
I0921 11:30:20.657] Sep 21 11:00:27.289: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
I0921 11:30:20.658] Sep 21 11:00:27.289: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.658] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:27.289
I0921 11:30:20.658] Sep 21 11:00:29.302: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947278848
I0921 11:30:20.658] Sep 21 11:00:29.302: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947278848
I0921 11:30:20.658] Sep 21 11:00:29.302: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.659] Sep 21 11:00:29.302: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.659] Sep 21 11:00:29.302: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.659] Sep 21 11:00:29.312: INFO: Kubelet Metrics: []
I0921 11:30:20.659] Sep 21 11:00:29.315: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
I0921 11:30:20.659] Sep 21 11:00:29.315: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.660] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:29.315
... skipping 27 lines ...
I0921 11:30:20.665] Sep 21 11:00:37.434: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
I0921 11:30:20.665] Sep 21 11:00:37.434: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
I0921 11:30:20.665] Sep 21 11:00:37.434: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.665] Sep 21 11:00:37.434: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.666] Sep 21 11:00:37.434: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.666] Sep 21 11:00:37.443: INFO: Kubelet Metrics: []
I0921 11:30:20.666] Sep 21 11:00:37.446: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
I0921 11:30:20.666] Sep 21 11:00:37.446: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.666] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:37.446
... skipping 207 lines ...
I0921 11:30:20.703] Sep 21 11:01:26.160: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
I0921 11:30:20.703] Sep 21 11:01:26.160: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
I0921 11:30:20.703] Sep 21 11:01:26.160: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.704] Sep 21 11:01:26.160: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.704] Sep 21 11:01:26.160: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.704] Sep 21 11:01:26.181: INFO: Kubelet Metrics: []
I0921 11:30:20.704] Sep 21 11:01:26.184: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
I0921 11:30:20.704] Sep 21 11:01:26.184: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.705] STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:01:26.184
I0921 11:30:20.705] STEP: checking for correctly formatted eviction events 09/21/22 11:01:27.264
I0921 11:30:20.705] [AfterEach] TOP-LEVEL
I0921 11:30:20.705]   test/e2e_node/eviction_test.go:592
I0921 11:30:20.705] STEP: deleting pods 09/21/22 11:01:27.267
I0921 11:30:20.705] STEP: deleting pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod 09/21/22 11:01:27.267
I0921 11:30:20.706] Sep 21 11:01:27.272: INFO: Waiting for pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod to disappear
... skipping 53 lines ...
I0921 11:30:20.717] 
I0921 11:30:20.718] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:20.718] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:20.718] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:20.718] 1 loaded units listed.
I0921 11:30:20.718] , kubelet-20220921T102832
I0921 11:30:20.718] W0921 11:02:03.445601    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:60670->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:20.719] STEP: Starting the kubelet 09/21/22 11:02:03.455
I0921 11:30:20.719] W0921 11:02:03.490070    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:20.719] Sep 21 11:02:08.493: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.719] Sep 21 11:02:09.495: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.720] Sep 21 11:02:10.499: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.720] Sep 21 11:02:11.502: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.720] Sep 21 11:02:12.505: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.721] Sep 21 11:02:13.508: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 34 lines ...
I0921 11:30:20.727] 
I0921 11:30:20.728]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:20.728]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:20.728]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:20.728]     1 loaded units listed.
I0921 11:30:20.728]     , kubelet-20220921T102832
I0921 11:30:20.728]     W0921 10:58:33.305530    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:60742->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:20.728]     STEP: Starting the kubelet 09/21/22 10:58:33.314
I0921 11:30:20.729]     W0921 10:58:33.347685    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:20.729]     Sep 21 10:58:38.354: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.729]     Sep 21 10:58:39.356: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.730]     Sep 21 10:58:40.359: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.730]     Sep 21 10:58:41.361: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.730]     Sep 21 10:58:42.364: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.731]     Sep 21 10:58:43.367: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
... skipping 24 lines ...
I0921 11:30:20.736]     STEP: Waiting for evictions to occur 09/21/22 10:59:18.446
I0921 11:30:20.736]     Sep 21 10:59:18.459: INFO: Kubelet Metrics: []
I0921 11:30:20.736]     Sep 21 10:59:18.469: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 15014227968
I0921 11:30:20.736]     Sep 21 10:59:18.469: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 15014227968
I0921 11:30:20.737]     Sep 21 10:59:18.471: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.737]     Sep 21 10:59:18.471: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.737]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:18.472
I0921 11:30:20.737]     Sep 21 10:59:20.485: INFO: Kubelet Metrics: []
I0921 11:30:20.737]     Sep 21 10:59:20.496: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 15014227968
I0921 11:30:20.738]     Sep 21 10:59:20.496: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 15014227968
I0921 11:30:20.738]     Sep 21 10:59:20.498: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.738]     Sep 21 10:59:20.498: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.738]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:20.498
I0921 11:30:20.738]     Sep 21 10:59:22.523: INFO: Kubelet Metrics: []
I0921 11:30:20.739]     Sep 21 10:59:22.535: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 15014227968
I0921 11:30:20.739]     Sep 21 10:59:22.535: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 15014227968
I0921 11:30:20.739]     Sep 21 10:59:22.537: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.739]     Sep 21 10:59:22.537: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.739]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:22.537
I0921 11:30:20.740]     Sep 21 10:59:24.548: INFO: Kubelet Metrics: []
I0921 11:30:20.740]     Sep 21 10:59:24.559: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811463680
I0921 11:30:20.740]     Sep 21 10:59:24.560: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811463680
I0921 11:30:20.740]     Sep 21 10:59:24.562: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.740]     Sep 21 10:59:24.562: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.741]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:24.562
I0921 11:30:20.741]     Sep 21 10:59:26.574: INFO: Kubelet Metrics: []
I0921 11:30:20.741]     Sep 21 10:59:26.592: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811463680
I0921 11:30:20.741]     Sep 21 10:59:26.592: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811463680
I0921 11:30:20.741]     Sep 21 10:59:26.595: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.741]     Sep 21 10:59:26.595: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.742]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:26.595
I0921 11:30:20.742]     Sep 21 10:59:28.606: INFO: Kubelet Metrics: []
I0921 11:30:20.742]     Sep 21 10:59:28.619: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811463680
I0921 11:30:20.742]     Sep 21 10:59:28.619: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811463680
I0921 11:30:20.742]     Sep 21 10:59:28.619: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.742]     Sep 21 10:59:28.619: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.743]     Sep 21 10:59:28.619: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.743]     Sep 21 10:59:28.621: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.743]     Sep 21 10:59:28.621: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.743]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:28.621
I0921 11:30:20.743]     Sep 21 10:59:30.635: INFO: Kubelet Metrics: []
I0921 11:30:20.744]     Sep 21 10:59:30.661: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811463680
I0921 11:30:20.744]     Sep 21 10:59:30.661: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811463680
I0921 11:30:20.744]     Sep 21 10:59:30.661: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.744]     Sep 21 10:59:30.661: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.744]     Sep 21 10:59:30.661: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.744]     Sep 21 10:59:30.664: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.745]     Sep 21 10:59:30.664: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.745]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:30.664
I0921 11:30:20.745]     Sep 21 10:59:32.677: INFO: Kubelet Metrics: []
I0921 11:30:20.745]     Sep 21 10:59:32.690: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811463680
I0921 11:30:20.745]     Sep 21 10:59:32.690: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811463680
I0921 11:30:20.745]     Sep 21 10:59:32.690: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.746]     Sep 21 10:59:32.690: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.746]     Sep 21 10:59:32.690: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.746]     Sep 21 10:59:32.692: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.746]     Sep 21 10:59:32.692: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.746]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:32.692
I0921 11:30:20.746]     Sep 21 10:59:34.704: INFO: Kubelet Metrics: []
I0921 11:30:20.747]     Sep 21 10:59:34.717: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.747]     Sep 21 10:59:34.717: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.747]     Sep 21 10:59:34.717: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.747]     Sep 21 10:59:34.717: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.747]     Sep 21 10:59:34.717: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.747]     Sep 21 10:59:34.717: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.747]     Sep 21 10:59:34.717: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.748]     Sep 21 10:59:34.720: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.748]     Sep 21 10:59:34.720: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.748]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:34.72
I0921 11:30:20.748]     Sep 21 10:59:36.738: INFO: Kubelet Metrics: []
I0921 11:30:20.748]     Sep 21 10:59:36.756: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.748]     Sep 21 10:59:36.756: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.749]     Sep 21 10:59:36.756: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.749]     Sep 21 10:59:36.756: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.749]     Sep 21 10:59:36.756: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.749]     Sep 21 10:59:36.756: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.749]     Sep 21 10:59:36.756: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.749]     Sep 21 10:59:36.759: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.750]     Sep 21 10:59:36.760: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.750]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:36.76
I0921 11:30:20.750]     Sep 21 10:59:38.774: INFO: Kubelet Metrics: []
I0921 11:30:20.750]     Sep 21 10:59:38.798: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.750]     Sep 21 10:59:38.798: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.750]     Sep 21 10:59:38.798: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.751]     Sep 21 10:59:38.798: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.751]     Sep 21 10:59:38.798: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.751]     Sep 21 10:59:38.798: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.751]     Sep 21 10:59:38.798: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.751]     Sep 21 10:59:38.800: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.751]     Sep 21 10:59:38.800: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.752]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:38.8
I0921 11:30:20.752]     Sep 21 10:59:40.811: INFO: Kubelet Metrics: []
I0921 11:30:20.752]     Sep 21 10:59:40.823: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.752]     Sep 21 10:59:40.823: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.752]     Sep 21 10:59:40.823: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.753]     Sep 21 10:59:40.823: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.753]     Sep 21 10:59:40.823: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.753]     Sep 21 10:59:40.823: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.753]     Sep 21 10:59:40.823: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.753]     Sep 21 10:59:40.825: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.754]     Sep 21 10:59:40.825: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.754]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:40.826
I0921 11:30:20.754]     Sep 21 10:59:42.839: INFO: Kubelet Metrics: []
I0921 11:30:20.754]     Sep 21 10:59:42.852: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.754]     Sep 21 10:59:42.852: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.755]     Sep 21 10:59:42.852: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.755]     Sep 21 10:59:42.852: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.755]     Sep 21 10:59:42.852: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.755]     Sep 21 10:59:42.852: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.755]     Sep 21 10:59:42.852: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.756]     Sep 21 10:59:42.854: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.756]     Sep 21 10:59:42.854: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.756]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:42.854
I0921 11:30:20.756]     Sep 21 10:59:44.867: INFO: Kubelet Metrics: []
I0921 11:30:20.756]     Sep 21 10:59:44.878: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.757]     Sep 21 10:59:44.878: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.757]     Sep 21 10:59:44.878: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.757]     Sep 21 10:59:44.878: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.757]     Sep 21 10:59:44.878: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.757]     Sep 21 10:59:44.878: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.757]     Sep 21 10:59:44.878: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.758]     Sep 21 10:59:44.880: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.758]     Sep 21 10:59:44.880: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.758]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:44.88
I0921 11:30:20.758]     Sep 21 10:59:46.892: INFO: Kubelet Metrics: []
I0921 11:30:20.758]     Sep 21 10:59:46.910: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.759]     Sep 21 10:59:46.910: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14811811840
I0921 11:30:20.759]     Sep 21 10:59:46.910: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.759]     Sep 21 10:59:46.910: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.759]     Sep 21 10:59:46.910: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.759]     Sep 21 10:59:46.910: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.759]     Sep 21 10:59:46.910: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.760]     Sep 21 10:59:46.913: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.760]     Sep 21 10:59:46.913: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.760]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:46.913
I0921 11:30:20.760]     Sep 21 10:59:48.925: INFO: Kubelet Metrics: []
I0921 11:30:20.760]     Sep 21 10:59:48.938: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.760]     Sep 21 10:59:48.939: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.761]     Sep 21 10:59:48.939: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.761]     Sep 21 10:59:48.939: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.761]     Sep 21 10:59:48.939: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.761]     Sep 21 10:59:48.939: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.761]     Sep 21 10:59:48.939: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.761]     Sep 21 10:59:48.941: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.762]     Sep 21 10:59:48.941: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.762]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:48.941
I0921 11:30:20.762]     Sep 21 10:59:50.959: INFO: Kubelet Metrics: []
I0921 11:30:20.762]     Sep 21 10:59:50.976: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.762]     Sep 21 10:59:50.976: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.763]     Sep 21 10:59:50.976: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.763]     Sep 21 10:59:50.976: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.763]     Sep 21 10:59:50.976: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.763]     Sep 21 10:59:50.976: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.763]     Sep 21 10:59:50.976: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.764]     Sep 21 10:59:50.979: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.764]     Sep 21 10:59:50.979: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.764]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:50.979
I0921 11:30:20.764]     Sep 21 10:59:52.991: INFO: Kubelet Metrics: []
I0921 11:30:20.764]     Sep 21 10:59:53.003: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.764]     Sep 21 10:59:53.003: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.765]     Sep 21 10:59:53.003: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.765]     Sep 21 10:59:53.003: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.765]     Sep 21 10:59:53.003: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.765]     Sep 21 10:59:53.003: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.765]     Sep 21 10:59:53.003: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.765]     Sep 21 10:59:53.005: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.766]     Sep 21 10:59:53.005: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.766]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:53.005
I0921 11:30:20.766]     Sep 21 10:59:55.019: INFO: Kubelet Metrics: []
I0921 11:30:20.766]     Sep 21 10:59:55.031: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.766]     Sep 21 10:59:55.031: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.766]     Sep 21 10:59:55.031: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.766]     Sep 21 10:59:55.031: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.767]     Sep 21 10:59:55.031: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.767]     Sep 21 10:59:55.031: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.767]     Sep 21 10:59:55.031: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.767]     Sep 21 10:59:55.033: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.767]     Sep 21 10:59:55.033: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.768]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:55.033
I0921 11:30:20.768]     Sep 21 10:59:57.056: INFO: Kubelet Metrics: []
I0921 11:30:20.768]     Sep 21 10:59:57.073: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.768]     Sep 21 10:59:57.074: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.768]     Sep 21 10:59:57.074: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.769]     Sep 21 10:59:57.074: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.769]     Sep 21 10:59:57.074: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.769]     Sep 21 10:59:57.074: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.769]     Sep 21 10:59:57.074: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.769]     Sep 21 10:59:57.076: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.769]     Sep 21 10:59:57.076: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.770]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:57.076
I0921 11:30:20.770]     Sep 21 10:59:59.087: INFO: Kubelet Metrics: []
I0921 11:30:20.770]     Sep 21 10:59:59.098: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.770]     Sep 21 10:59:59.098: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.770]     Sep 21 10:59:59.098: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.771]     Sep 21 10:59:59.098: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.771]     Sep 21 10:59:59.098: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.771]     Sep 21 10:59:59.098: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.771]     Sep 21 10:59:59.098: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.771]     Sep 21 10:59:59.101: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.772]     Sep 21 10:59:59.101: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.772]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 10:59:59.101
I0921 11:30:20.772]     Sep 21 11:00:01.114: INFO: Kubelet Metrics: []
I0921 11:30:20.772]     Sep 21 11:00:01.126: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.772]     Sep 21 11:00:01.126: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.773]     Sep 21 11:00:01.126: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.773]     Sep 21 11:00:01.126: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.773]     Sep 21 11:00:01.126: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.773]     Sep 21 11:00:01.126: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.773]     Sep 21 11:00:01.126: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.774]     Sep 21 11:00:01.129: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.774]     Sep 21 11:00:01.129: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.774]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:01.129
I0921 11:30:20.774]     Sep 21 11:00:03.143: INFO: Kubelet Metrics: []
I0921 11:30:20.774]     Sep 21 11:00:03.155: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.775]     Sep 21 11:00:03.155: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.775]     Sep 21 11:00:03.155: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.775]     Sep 21 11:00:03.155: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.775]     Sep 21 11:00:03.155: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.775]     Sep 21 11:00:03.155: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.775]     Sep 21 11:00:03.155: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.776]     Sep 21 11:00:03.157: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.776]     Sep 21 11:00:03.157: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.776]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:03.157
I0921 11:30:20.776]     Sep 21 11:00:05.175: INFO: Kubelet Metrics: []
I0921 11:30:20.776]     Sep 21 11:00:05.194: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.777]     Sep 21 11:00:05.194: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14813102080
I0921 11:30:20.777]     Sep 21 11:00:05.194: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.777]     Sep 21 11:00:05.194: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.777]     Sep 21 11:00:05.194: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.778]     Sep 21 11:00:05.194: INFO: Pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod
I0921 11:30:20.778]     Sep 21 11:00:05.194: INFO: --- summary Volume: test-volume UsedBytes: 134152192
I0921 11:30:20.778]     Sep 21 11:00:05.197: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.778]     Sep 21 11:00:05.197: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.779]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:05.197
I0921 11:30:20.779]     Sep 21 11:00:07.218: INFO: Kubelet Metrics: []
I0921 11:30:20.779]     Sep 21 11:00:07.238: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947278848
I0921 11:30:20.779]     Sep 21 11:00:07.238: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947278848
I0921 11:30:20.779]     Sep 21 11:00:07.238: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.780]     Sep 21 11:00:07.238: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.780]     Sep 21 11:00:07.238: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.780]     Sep 21 11:00:07.241: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
I0921 11:30:20.780]     Sep 21 11:00:07.241: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.780]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:07.241
I0921 11:30:20.781]     STEP: making sure pressure from test has surfaced before continuing 09/21/22 11:00:07.241
I0921 11:30:20.781]     STEP: Waiting for NodeCondition: NoPressure to no longer exist on the node 09/21/22 11:00:27.241
I0921 11:30:20.781]     Sep 21 11:00:27.253: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947278848
I0921 11:30:20.781]     Sep 21 11:00:27.253: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947278848
I0921 11:30:20.781]     Sep 21 11:00:27.253: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.782]     Sep 21 11:00:27.253: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
... skipping 3 lines ...
I0921 11:30:20.782]     Sep 21 11:00:27.276: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947278848
I0921 11:30:20.783]     Sep 21 11:00:27.276: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947278848
I0921 11:30:20.783]     Sep 21 11:00:27.276: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.783]     Sep 21 11:00:27.276: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.783]     Sep 21 11:00:27.276: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.783]     Sep 21 11:00:27.286: INFO: Kubelet Metrics: []
I0921 11:30:20.783]     Sep 21 11:00:27.289: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
I0921 11:30:20.784]     Sep 21 11:00:27.289: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.784]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:27.289
I0921 11:30:20.784]     Sep 21 11:00:29.302: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947278848
I0921 11:30:20.784]     Sep 21 11:00:29.302: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947278848
I0921 11:30:20.784]     Sep 21 11:00:29.302: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.785]     Sep 21 11:00:29.302: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.785]     Sep 21 11:00:29.302: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.785]     Sep 21 11:00:29.312: INFO: Kubelet Metrics: []
I0921 11:30:20.785]     Sep 21 11:00:29.315: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
I0921 11:30:20.786]     Sep 21 11:00:29.315: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.786]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:29.315
I0921 11:30:20.786]     Sep 21 11:00:31.331: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947278848
I0921 11:30:20.786]     Sep 21 11:00:31.331: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947278848
I0921 11:30:20.786]     Sep 21 11:00:31.331: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.787]     Sep 21 11:00:31.331: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.787]     Sep 21 11:00:31.331: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.787]     Sep 21 11:00:31.341: INFO: Kubelet Metrics: []
I0921 11:30:20.787]     Sep 21 11:00:31.343: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
I0921 11:30:20.788]     Sep 21 11:00:31.344: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.788]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:31.344
I0921 11:30:20.788]     Sep 21 11:00:33.358: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947278848
I0921 11:30:20.788]     Sep 21 11:00:33.358: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947278848
I0921 11:30:20.788]     Sep 21 11:00:33.358: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.789]     Sep 21 11:00:33.358: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.789]     Sep 21 11:00:33.358: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.789]     Sep 21 11:00:33.389: INFO: Kubelet Metrics: []
I0921 11:30:20.789]     Sep 21 11:00:33.392: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
I0921 11:30:20.789]     Sep 21 11:00:33.392: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.790]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:33.392
I0921 11:30:20.790]     Sep 21 11:00:35.405: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947278848
I0921 11:30:20.790]     Sep 21 11:00:35.405: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947278848
I0921 11:30:20.790]     Sep 21 11:00:35.405: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.790]     Sep 21 11:00:35.405: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.791]     Sep 21 11:00:35.405: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.791]     Sep 21 11:00:35.416: INFO: Kubelet Metrics: []
I0921 11:30:20.791]     Sep 21 11:00:35.419: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
I0921 11:30:20.791]     Sep 21 11:00:35.419: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.791]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:00:35.419
I0921 11:30:20.792]     Sep 21 11:00:37.434: INFO: imageFsInfo.CapacityBytes: 20926410752, imageFsInfo.AvailableBytes: 14947471360
I0921 11:30:20.792]     Sep 21 11:00:37.434: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
I0921 11:30:20.792]     Sep 21 11:00:37.434: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.792]     Sep 21 11:00:37.434: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.793]     Sep 21 11:00:37.434: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.793]     Sep 21 11:00:37.443: INFO: Kubelet Metrics: []
I0921 11:30:20.793]     Sep 21 11:00:37.446: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
I0921 11:30:20.793]     Sep 21 11:00:37.446: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
... skipping 209 lines ...
I0921 11:30:20.833]     Sep 21 11:01:26.160: INFO: rootFsInfo.CapacityBytes: 20926410752, rootFsInfo.AvailableBytes: 14947471360
I0921 11:30:20.833]     Sep 21 11:01:26.160: INFO: Pod: emptydir-concealed-disk-under-sizelimit-quotas-true-pod
I0921 11:30:20.833]     Sep 21 11:01:26.160: INFO: --- summary Container: emptydir-concealed-disk-under-sizelimit-quotas-true-container UsedBytes: 0
I0921 11:30:20.833]     Sep 21 11:01:26.160: INFO: --- summary Volume: test-volume UsedBytes: 67043328
I0921 11:30:20.834]     Sep 21 11:01:26.181: INFO: Kubelet Metrics: []
I0921 11:30:20.834]     Sep 21 11:01:26.184: INFO: fetching pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod; phase= Failed
I0921 11:30:20.834]     Sep 21 11:01:26.184: INFO: fetching pod emptydir-concealed-disk-under-sizelimit-quotas-true-pod; phase= Running
I0921 11:30:20.834]     STEP: checking eviction ordering and ensuring important pods don't fail 09/21/22 11:01:26.184
I0921 11:30:20.834]     STEP: checking for correctly formatted eviction events 09/21/22 11:01:27.264
I0921 11:30:20.835]     [AfterEach] TOP-LEVEL
I0921 11:30:20.835]       test/e2e_node/eviction_test.go:592
I0921 11:30:20.835]     STEP: deleting pods 09/21/22 11:01:27.267
I0921 11:30:20.835]     STEP: deleting pod: emptydir-concealed-disk-over-sizelimit-quotas-true-pod 09/21/22 11:01:27.267
I0921 11:30:20.835]     Sep 21 11:01:27.272: INFO: Waiting for pod emptydir-concealed-disk-over-sizelimit-quotas-true-pod to disappear
... skipping 53 lines ...
I0921 11:30:20.846] 
I0921 11:30:20.847]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:20.847]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:20.847]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:20.847]     1 loaded units listed.
I0921 11:30:20.847]     , kubelet-20220921T102832
I0921 11:30:20.847]     W0921 11:02:03.445601    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:60670->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:20.848]     STEP: Starting the kubelet 09/21/22 11:02:03.455
I0921 11:30:20.848]     W0921 11:02:03.490070    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:20.848]     Sep 21 11:02:08.493: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.849]     Sep 21 11:02:09.495: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.849]     Sep 21 11:02:10.499: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.849]     Sep 21 11:02:11.502: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.849]     Sep 21 11:02:12.505: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.850]     Sep 21 11:02:13.508: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 71 lines ...
I0921 11:30:20.862] 
I0921 11:30:20.862] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:20.862] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:20.862] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:20.862] 1 loaded units listed.
I0921 11:30:20.862] , kubelet-20220921T102832
I0921 11:30:20.863] W0921 11:02:14.644492    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:20.863] STEP: Starting the kubelet 09/21/22 11:02:14.654
I0921 11:30:20.863] W0921 11:02:14.690773    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:20.863] Sep 21 11:02:19.705: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.864] Sep 21 11:02:20.709: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.864] Sep 21 11:02:21.711: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.864] Sep 21 11:02:22.714: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.865] Sep 21 11:02:23.717: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.865] Sep 21 11:02:24.720: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 12 lines ...
I0921 11:30:20.867] 
I0921 11:30:20.867] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:20.868] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:20.868] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:20.868] 1 loaded units listed.
I0921 11:30:20.868] , kubelet-20220921T102832
I0921 11:30:20.868] W0921 11:02:33.323937    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:33908->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:20.869] STEP: Starting the kubelet 09/21/22 11:02:33.334
I0921 11:30:20.869] W0921 11:02:33.392581    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:20.869] Sep 21 11:02:38.395: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.869] Sep 21 11:02:39.398: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.870] Sep 21 11:02:40.400: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.870] Sep 21 11:02:41.403: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.870] Sep 21 11:02:42.405: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.870] Sep 21 11:02:43.408: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 29 lines ...
I0921 11:30:20.876] 
I0921 11:30:20.876]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:20.876]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:20.876]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:20.876]     1 loaded units listed.
I0921 11:30:20.876]     , kubelet-20220921T102832
I0921 11:30:20.877]     W0921 11:02:14.644492    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:20.877]     STEP: Starting the kubelet 09/21/22 11:02:14.654
I0921 11:30:20.877]     W0921 11:02:14.690773    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:20.877]     Sep 21 11:02:19.705: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.878]     Sep 21 11:02:20.709: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.878]     Sep 21 11:02:21.711: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.878]     Sep 21 11:02:22.714: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.879]     Sep 21 11:02:23.717: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.879]     Sep 21 11:02:24.720: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 12 lines ...
I0921 11:30:20.881] 
I0921 11:30:20.882]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:20.882]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:20.882]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:20.882]     1 loaded units listed.
I0921 11:30:20.882]     , kubelet-20220921T102832
I0921 11:30:20.882]     W0921 11:02:33.323937    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:33908->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:20.883]     STEP: Starting the kubelet 09/21/22 11:02:33.334
I0921 11:30:20.883]     W0921 11:02:33.392581    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:20.883]     Sep 21 11:02:38.395: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.884]     Sep 21 11:02:39.398: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.884]     Sep 21 11:02:40.400: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.884]     Sep 21 11:02:41.403: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.884]     Sep 21 11:02:42.405: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.885]     Sep 21 11:02:43.408: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 19 lines ...
I0921 11:30:20.888] STEP: Building a namespace api object, basename topology-manager-test 09/21/22 11:02:44.421
I0921 11:30:20.888] Sep 21 11:02:44.429: INFO: Skipping waiting for service account
I0921 11:30:20.888] [It] run Topology Manager policy test suite
I0921 11:30:20.888]   test/e2e_node/topology_manager_test.go:888
I0921 11:30:20.889] STEP: by configuring Topology Manager policy to single-numa-node 09/21/22 11:02:44.446
I0921 11:30:20.889] Sep 21 11:02:44.446: INFO: Configuring topology Manager policy to single-numa-node
I0921 11:30:20.889] Sep 21 11:02:44.446: INFO: failed to find any VF device from [{0000:00:00.0 -1 false false} {0000:00:01.0 -1 false false} {0000:00:01.3 -1 false false} {0000:00:03.0 -1 false false} {0000:00:04.0 -1 false false} {0000:00:05.0 -1 false false}]
I0921 11:30:20.890] Sep 21 11:02:44.447: INFO: New kubelet config is {{ } %!s(bool=true) /tmp/node-e2e-20220921T102832/static-pods3606252740 {1m0s} {10s} {20s}  map[] 0.0.0.0 %!s(int32=10250) %!s(int32=10255) /usr/libexec/kubernetes/kubelet-plugins/volume/exec/  /var/lib/kubelet/pki/kubelet.crt /var/lib/kubelet/pki/kubelet.key []  %!s(bool=false) %!s(bool=false) {{} {%!s(bool=false) {2m0s}} {%!s(bool=true)}} {AlwaysAllow {{5m0s} {30s}}} %!s(int32=5) %!s(int32=10) %!s(int32=5) %!s(int32=10) %!s(bool=true) %!s(bool=false) %!s(int32=10248) 127.0.0.1 %!s(int32=-999)  [] {4h0m0s} {10s} {5m0s} %!s(int32=40) {2m0s} %!s(int32=85) %!s(int32=80) {10s} /system.slice/kubelet.service  / %!s(bool=true) systemd static map[] {1s} None single-numa-node container map[] {2m0s} promiscuous-bridge %!s(int32=110) 10.100.0.0/24 %!s(int64=-1) /etc/resolv.conf %!s(bool=false) %!s(bool=true) {100ms} %!s(int64=1000000) %!s(int32=50) application/vnd.kubernetes.protobuf %!s(int32=5) %!s(int32=10) %!s(bool=false) map[memory.available:250Mi nodefs.available:10% nodefs.inodesFree:5%] map[] map[] {30s} %!s(int32=0) map[nodefs.available:5% nodefs.inodesFree:5%] %!s(int32=0) %!s(bool=true) %!s(bool=false) %!s(bool=true) %!s(int32=14) %!s(int32=15) map[CPUManager:%!s(bool=true) LocalStorageCapacityIsolation:%!s(bool=true) TopologyManager:%!s(bool=true)] %!s(bool=true) {} 10Mi %!s(int32=5) Watch [] %!s(bool=false) map[] map[cpu:200m]   [pods]   {text 5s %!s(v1.VerbosityLevel=4) [] {{%!s(bool=false) {{{%!s(int64=0) %!s(resource.Scale=0)} {%!s(*inf.Dec=<nil>)} 0 DecimalSI}}}}} %!s(bool=true) {0s} {0s} [] [] %!s(bool=true) %!s(bool=true) %!s(bool=false) %!s(*float64=0xc0012fe300) [] %!s(bool=true) %!s(*v1.TracingConfiguration=<nil>) %!s(bool=true)}
I0921 11:30:20.891] STEP: Stopping the kubelet 09/21/22 11:02:44.447
I0921 11:30:20.891] Sep 21 11:02:44.494: INFO: Get running kubelet with systemctl:   UNIT                            LOAD   ACTIVE SUB     DESCRIPTION
I0921 11:30:20.891]   kubelet-20220921T102832.service loaded active running /tmp/node-e2e-20220921T102832/kubelet --kubeconfig /tmp/node-e2e-20220921T102832/kubeconfig --root-dir /var/lib/kubelet --v 4 --feature-gates LocalStorageCapacityIsolation=true --hostname-override n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20220921T102832/kubelet-config --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service
I0921 11:30:20.891] 
I0921 11:30:20.892] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:20.892] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:20.892] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:20.892] 1 loaded units listed.
I0921 11:30:20.892] , kubelet-20220921T102832
I0921 11:30:20.893] W0921 11:02:44.589943    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:45092->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:20.893] STEP: Starting the kubelet 09/21/22 11:02:44.599
I0921 11:30:20.893] W0921 11:02:44.655864    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:20.893] Sep 21 11:02:49.659: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.894] Sep 21 11:02:50.662: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.894] Sep 21 11:02:51.665: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.894] Sep 21 11:02:52.668: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.895] Sep 21 11:02:53.672: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.895] Sep 21 11:02:54.674: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
... skipping 7 lines ...
I0921 11:30:20.897] 
I0921 11:30:20.897] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:20.897] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:20.897] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:20.897] 1 loaded units listed.
I0921 11:30:20.897] , kubelet-20220921T102832
I0921 11:30:20.898] W0921 11:02:55.828940    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:54950->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:20.898] STEP: Starting the kubelet 09/21/22 11:02:55.84
I0921 11:30:20.898] W0921 11:02:55.895082    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:20.898] Sep 21 11:03:00.901: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.899] Sep 21 11:03:01.904: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.899] Sep 21 11:03:02.907: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.899] Sep 21 11:03:03.909: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.900] Sep 21 11:03:04.913: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.900] Sep 21 11:03:05.916: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
... skipping 19 lines ...
I0921 11:30:20.903]     STEP: Building a namespace api object, basename topology-manager-test 09/21/22 11:02:44.421
I0921 11:30:20.903]     Sep 21 11:02:44.429: INFO: Skipping waiting for service account
I0921 11:30:20.903]     [It] run Topology Manager policy test suite
I0921 11:30:20.903]       test/e2e_node/topology_manager_test.go:888
I0921 11:30:20.903]     STEP: by configuring Topology Manager policy to single-numa-node 09/21/22 11:02:44.446
I0921 11:30:20.904]     Sep 21 11:02:44.446: INFO: Configuring topology Manager policy to single-numa-node
I0921 11:30:20.904]     Sep 21 11:02:44.446: INFO: failed to find any VF device from [{0000:00:00.0 -1 false false} {0000:00:01.0 -1 false false} {0000:00:01.3 -1 false false} {0000:00:03.0 -1 false false} {0000:00:04.0 -1 false false} {0000:00:05.0 -1 false false}]
I0921 11:30:20.905]     Sep 21 11:02:44.447: INFO: New kubelet config is {{ } %!s(bool=true) /tmp/node-e2e-20220921T102832/static-pods3606252740 {1m0s} {10s} {20s}  map[] 0.0.0.0 %!s(int32=10250) %!s(int32=10255) /usr/libexec/kubernetes/kubelet-plugins/volume/exec/  /var/lib/kubelet/pki/kubelet.crt /var/lib/kubelet/pki/kubelet.key []  %!s(bool=false) %!s(bool=false) {{} {%!s(bool=false) {2m0s}} {%!s(bool=true)}} {AlwaysAllow {{5m0s} {30s}}} %!s(int32=5) %!s(int32=10) %!s(int32=5) %!s(int32=10) %!s(bool=true) %!s(bool=false) %!s(int32=10248) 127.0.0.1 %!s(int32=-999)  [] {4h0m0s} {10s} {5m0s} %!s(int32=40) {2m0s} %!s(int32=85) %!s(int32=80) {10s} /system.slice/kubelet.service  / %!s(bool=true) systemd static map[] {1s} None single-numa-node container map[] {2m0s} promiscuous-bridge %!s(int32=110) 10.100.0.0/24 %!s(int64=-1) /etc/resolv.conf %!s(bool=false) %!s(bool=true) {100ms} %!s(int64=1000000) %!s(int32=50) application/vnd.kubernetes.protobuf %!s(int32=5) %!s(int32=10) %!s(bool=false) map[memory.available:250Mi nodefs.available:10% nodefs.inodesFree:5%] map[] map[] {30s} %!s(int32=0) map[nodefs.available:5% nodefs.inodesFree:5%] %!s(int32=0) %!s(bool=true) %!s(bool=false) %!s(bool=true) %!s(int32=14) %!s(int32=15) map[CPUManager:%!s(bool=true) LocalStorageCapacityIsolation:%!s(bool=true) TopologyManager:%!s(bool=true)] %!s(bool=true) {} 10Mi %!s(int32=5) Watch [] %!s(bool=false) map[] map[cpu:200m]   [pods]   {text 5s %!s(v1.VerbosityLevel=4) [] {{%!s(bool=false) {{{%!s(int64=0) %!s(resource.Scale=0)} {%!s(*inf.Dec=<nil>)} 0 DecimalSI}}}}} %!s(bool=true) {0s} {0s} [] [] %!s(bool=true) %!s(bool=true) %!s(bool=false) %!s(*float64=0xc0012fe300) [] %!s(bool=true) %!s(*v1.TracingConfiguration=<nil>) %!s(bool=true)}
I0921 11:30:20.906]     STEP: Stopping the kubelet 09/21/22 11:02:44.447
I0921 11:30:20.906]     Sep 21 11:02:44.494: INFO: Get running kubelet with systemctl:   UNIT                            LOAD   ACTIVE SUB     DESCRIPTION
I0921 11:30:20.906]       kubelet-20220921T102832.service loaded active running /tmp/node-e2e-20220921T102832/kubelet --kubeconfig /tmp/node-e2e-20220921T102832/kubeconfig --root-dir /var/lib/kubelet --v 4 --feature-gates LocalStorageCapacityIsolation=true --hostname-override n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20220921T102832/kubelet-config --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service
I0921 11:30:20.906] 
I0921 11:30:20.907]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:20.907]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:20.907]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:20.907]     1 loaded units listed.
I0921 11:30:20.907]     , kubelet-20220921T102832
I0921 11:30:20.907]     W0921 11:02:44.589943    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:45092->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:20.908]     STEP: Starting the kubelet 09/21/22 11:02:44.599
I0921 11:30:20.908]     W0921 11:02:44.655864    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:20.908]     Sep 21 11:02:49.659: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.909]     Sep 21 11:02:50.662: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.909]     Sep 21 11:02:51.665: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.909]     Sep 21 11:02:52.668: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.910]     Sep 21 11:02:53.672: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.910]     Sep 21 11:02:54.674: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
... skipping 7 lines ...
I0921 11:30:20.911] 
I0921 11:30:20.912]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:20.912]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:20.912]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:20.912]     1 loaded units listed.
I0921 11:30:20.912]     , kubelet-20220921T102832
I0921 11:30:20.912]     W0921 11:02:55.828940    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:54950->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:20.913]     STEP: Starting the kubelet 09/21/22 11:02:55.84
I0921 11:30:20.913]     W0921 11:02:55.895082    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:20.913]     Sep 21 11:03:00.901: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.913]     Sep 21 11:03:01.904: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.914]     Sep 21 11:03:02.907: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.914]     Sep 21 11:03:03.909: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.914]     Sep 21 11:03:04.913: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:20.915]     Sep 21 11:03:05.916: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
... skipping 26 lines ...
I0921 11:30:20.919] 
I0921 11:30:20.919] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:20.920] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:20.920] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:20.920] 1 loaded units listed.
I0921 11:30:20.920] , kubelet-20220921T102832
I0921 11:30:20.920] W0921 11:03:07.105960    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:41618->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:20.920] STEP: Starting the kubelet 09/21/22 11:03:07.118
I0921 11:30:20.921] W0921 11:03:07.169607    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:20.921] Sep 21 11:03:12.175: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.921] Sep 21 11:03:13.178: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.922] Sep 21 11:03:14.181: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.922] Sep 21 11:03:15.184: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.922] Sep 21 11:03:16.187: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.922] Sep 21 11:03:17.189: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 23 lines ...
I0921 11:30:20.928] 
I0921 11:30:20.928] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:20.928] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:20.928] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:20.928] 1 loaded units listed.
I0921 11:30:20.928] , kubelet-20220921T102832
I0921 11:30:20.929] W0921 11:03:28.368970    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:52764->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:20.929] STEP: Starting the kubelet 09/21/22 11:03:28.378
I0921 11:30:20.929] W0921 11:03:28.431009    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:20.929] Sep 21 11:03:33.435: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.930] Sep 21 11:03:34.438: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.930] Sep 21 11:03:35.441: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.930] Sep 21 11:03:36.444: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.930] Sep 21 11:03:37.446: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.931] Sep 21 11:03:38.449: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 26 lines ...
I0921 11:30:20.936] 
I0921 11:30:20.936]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:20.936]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:20.936]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:20.936]     1 loaded units listed.
I0921 11:30:20.936]     , kubelet-20220921T102832
I0921 11:30:20.936]     W0921 11:03:07.105960    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:41618->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:20.937]     STEP: Starting the kubelet 09/21/22 11:03:07.118
I0921 11:30:20.937]     W0921 11:03:07.169607    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:20.937]     Sep 21 11:03:12.175: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.937]     Sep 21 11:03:13.178: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.938]     Sep 21 11:03:14.181: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.938]     Sep 21 11:03:15.184: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.938]     Sep 21 11:03:16.187: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.939]     Sep 21 11:03:17.189: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 23 lines ...
I0921 11:30:20.944] 
I0921 11:30:20.944]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:20.944]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:20.944]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:20.945]     1 loaded units listed.
I0921 11:30:20.945]     , kubelet-20220921T102832
I0921 11:30:20.945]     W0921 11:03:28.368970    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:52764->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:20.945]     STEP: Starting the kubelet 09/21/22 11:03:28.378
I0921 11:30:20.945]     W0921 11:03:28.431009    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:20.946]     Sep 21 11:03:33.435: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.946]     Sep 21 11:03:34.438: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.946]     Sep 21 11:03:35.441: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.947]     Sep 21 11:03:36.444: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.947]     Sep 21 11:03:37.446: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:20.947]     Sep 21 11:03:38.449: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 13 lines ...
I0921 11:30:20.950] STEP: Creating a kubernetes client 09/21/22 11:03:39.461
I0921 11:30:20.950] STEP: Building a namespace api object, basename downward-api 09/21/22 11:03:39.461
I0921 11:30:20.950] Sep 21 11:03:39.467: INFO: Skipping waiting for service account
I0921 11:30:20.950] [It] should provide default limits.hugepages-<pagesize> from node allocatable
I0921 11:30:20.950]   test/e2e/common/node/downwardapi.go:348
I0921 11:30:20.951] STEP: Creating a pod to test downward api env vars 09/21/22 11:03:39.467
I0921 11:30:20.951] Sep 21 11:03:39.479: INFO: Waiting up to 5m0s for pod "downward-api-04378d4e-64e2-4bb1-8e62-011e37fd58cd" in namespace "downward-api-3631" to be "Succeeded or Failed"
I0921 11:30:20.951] Sep 21 11:03:39.481: INFO: Pod "downward-api-04378d4e-64e2-4bb1-8e62-011e37fd58cd": Phase="Pending", Reason="", readiness=false. Elapsed: 1.620288ms
I0921 11:30:20.952] Sep 21 11:03:41.486: INFO: Pod "downward-api-04378d4e-64e2-4bb1-8e62-011e37fd58cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006664779s
I0921 11:30:20.952] Sep 21 11:03:43.483: INFO: Pod "downward-api-04378d4e-64e2-4bb1-8e62-011e37fd58cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.003813026s
I0921 11:30:20.952] Sep 21 11:03:45.483: INFO: Pod "downward-api-04378d4e-64e2-4bb1-8e62-011e37fd58cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.00359286s
I0921 11:30:20.952] STEP: Saw pod success 09/21/22 11:03:45.483
I0921 11:30:20.953] Sep 21 11:03:45.483: INFO: Pod "downward-api-04378d4e-64e2-4bb1-8e62-011e37fd58cd" satisfied condition "Succeeded or Failed"
I0921 11:30:20.953] Sep 21 11:03:45.487: INFO: Trying to get logs from node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d pod downward-api-04378d4e-64e2-4bb1-8e62-011e37fd58cd container dapi-container: <nil>
I0921 11:30:20.953] STEP: delete the pod 09/21/22 11:03:45.498
I0921 11:30:20.953] Sep 21 11:03:45.502: INFO: Waiting for pod downward-api-04378d4e-64e2-4bb1-8e62-011e37fd58cd to disappear
I0921 11:30:20.953] Sep 21 11:03:45.506: INFO: Pod downward-api-04378d4e-64e2-4bb1-8e62-011e37fd58cd no longer exists
I0921 11:30:20.954] [DeferCleanup] [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages]
I0921 11:30:20.954]   dump namespaces | framework.go:173
... skipping 16 lines ...
I0921 11:30:20.957]     STEP: Creating a kubernetes client 09/21/22 11:03:39.461
I0921 11:30:20.957]     STEP: Building a namespace api object, basename downward-api 09/21/22 11:03:39.461
I0921 11:30:20.957]     Sep 21 11:03:39.467: INFO: Skipping waiting for service account
I0921 11:30:20.957]     [It] should provide default limits.hugepages-<pagesize> from node allocatable
I0921 11:30:20.957]       test/e2e/common/node/downwardapi.go:348
I0921 11:30:20.958]     STEP: Creating a pod to test downward api env vars 09/21/22 11:03:39.467
I0921 11:30:20.958]     Sep 21 11:03:39.479: INFO: Waiting up to 5m0s for pod "downward-api-04378d4e-64e2-4bb1-8e62-011e37fd58cd" in namespace "downward-api-3631" to be "Succeeded or Failed"
I0921 11:30:20.958]     Sep 21 11:03:39.481: INFO: Pod "downward-api-04378d4e-64e2-4bb1-8e62-011e37fd58cd": Phase="Pending", Reason="", readiness=false. Elapsed: 1.620288ms
I0921 11:30:20.958]     Sep 21 11:03:41.486: INFO: Pod "downward-api-04378d4e-64e2-4bb1-8e62-011e37fd58cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006664779s
I0921 11:30:20.959]     Sep 21 11:03:43.483: INFO: Pod "downward-api-04378d4e-64e2-4bb1-8e62-011e37fd58cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.003813026s
I0921 11:30:20.959]     Sep 21 11:03:45.483: INFO: Pod "downward-api-04378d4e-64e2-4bb1-8e62-011e37fd58cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.00359286s
I0921 11:30:20.959]     STEP: Saw pod success 09/21/22 11:03:45.483
I0921 11:30:20.959]     Sep 21 11:03:45.483: INFO: Pod "downward-api-04378d4e-64e2-4bb1-8e62-011e37fd58cd" satisfied condition "Succeeded or Failed"
I0921 11:30:20.960]     Sep 21 11:03:45.487: INFO: Trying to get logs from node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d pod downward-api-04378d4e-64e2-4bb1-8e62-011e37fd58cd container dapi-container: <nil>
I0921 11:30:20.960]     STEP: delete the pod 09/21/22 11:03:45.498
I0921 11:30:20.960]     Sep 21 11:03:45.502: INFO: Waiting for pod downward-api-04378d4e-64e2-4bb1-8e62-011e37fd58cd to disappear
I0921 11:30:20.960]     Sep 21 11:03:45.506: INFO: Pod downward-api-04378d4e-64e2-4bb1-8e62-011e37fd58cd no longer exists
I0921 11:30:20.960]     [DeferCleanup] [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages]
I0921 11:30:20.961]       dump namespaces | framework.go:173
... skipping 654 lines ...
I0921 11:30:21.082] 
I0921 11:30:21.083] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.083] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.083] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.083] 1 loaded units listed.
I0921 11:30:21.083] , kubelet-20220921T102832
I0921 11:30:21.084] W0921 11:08:29.009976    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:48704->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:21.084] STEP: Starting the kubelet 09/21/22 11:08:29.023
I0921 11:30:21.084] W0921 11:08:29.080157    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:21.084] Sep 21 11:08:34.087: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.085] Sep 21 11:08:35.090: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.085] Sep 21 11:08:36.093: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.085] Sep 21 11:08:37.095: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.086] Sep 21 11:08:38.098: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.086] Sep 21 11:08:39.100: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.086] [It] should use unconfined when specified
I0921 11:30:21.086]   test/e2e_node/seccompdefault_test.go:66
I0921 11:30:21.087] STEP: Creating a pod to test SeccompDefault-unconfined 09/21/22 11:08:40.104
I0921 11:30:21.087] Sep 21 11:08:40.112: INFO: Waiting up to 5m0s for pod "seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e" in namespace "seccompdefault-test-2197" to be "Succeeded or Failed"
I0921 11:30:21.087] Sep 21 11:08:40.118: INFO: Pod "seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.678486ms
I0921 11:30:21.087] Sep 21 11:08:42.120: INFO: Pod "seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008054237s
I0921 11:30:21.088] Sep 21 11:08:44.122: INFO: Pod "seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009520607s
I0921 11:30:21.088] Sep 21 11:08:46.121: INFO: Pod "seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.009173357s
I0921 11:30:21.088] STEP: Saw pod success 09/21/22 11:08:46.121
I0921 11:30:21.088] Sep 21 11:08:46.121: INFO: Pod "seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e" satisfied condition "Succeeded or Failed"
I0921 11:30:21.088] Sep 21 11:08:46.123: INFO: Trying to get logs from node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d pod seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e container seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e: <nil>
I0921 11:30:21.089] STEP: delete the pod 09/21/22 11:08:46.135
I0921 11:30:21.089] Sep 21 11:08:46.139: INFO: Waiting for pod seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e to disappear
I0921 11:30:21.089] Sep 21 11:08:46.143: INFO: Pod seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e no longer exists
I0921 11:30:21.089] [AfterEach] with SeccompDefault enabled
I0921 11:30:21.089]   test/e2e_node/util.go:181
... skipping 3 lines ...
I0921 11:30:21.090] 
I0921 11:30:21.090] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.090] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.090] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.090] 1 loaded units listed.
I0921 11:30:21.091] , kubelet-20220921T102832
I0921 11:30:21.091] W0921 11:08:46.297142    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:35890->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:21.091] STEP: Starting the kubelet 09/21/22 11:08:46.308
I0921 11:30:21.091] W0921 11:08:46.367053    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:21.091] Sep 21 11:08:51.373: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.092] Sep 21 11:08:52.377: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.092] Sep 21 11:08:53.379: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.092] Sep 21 11:08:54.383: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.093] Sep 21 11:08:55.386: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.093] Sep 21 11:08:56.389: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 27 lines ...
I0921 11:30:21.097] 
I0921 11:30:21.097]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.098]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.098]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.098]     1 loaded units listed.
I0921 11:30:21.098]     , kubelet-20220921T102832
I0921 11:30:21.098]     W0921 11:08:29.009976    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:48704->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:21.098]     STEP: Starting the kubelet 09/21/22 11:08:29.023
I0921 11:30:21.099]     W0921 11:08:29.080157    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:21.099]     Sep 21 11:08:34.087: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.099]     Sep 21 11:08:35.090: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.099]     Sep 21 11:08:36.093: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.100]     Sep 21 11:08:37.095: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.100]     Sep 21 11:08:38.098: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.100]     Sep 21 11:08:39.100: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.100]     [It] should use unconfined when specified
I0921 11:30:21.100]       test/e2e_node/seccompdefault_test.go:66
I0921 11:30:21.100]     STEP: Creating a pod to test SeccompDefault-unconfined 09/21/22 11:08:40.104
I0921 11:30:21.101]     Sep 21 11:08:40.112: INFO: Waiting up to 5m0s for pod "seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e" in namespace "seccompdefault-test-2197" to be "Succeeded or Failed"
I0921 11:30:21.101]     Sep 21 11:08:40.118: INFO: Pod "seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.678486ms
I0921 11:30:21.101]     Sep 21 11:08:42.120: INFO: Pod "seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008054237s
I0921 11:30:21.101]     Sep 21 11:08:44.122: INFO: Pod "seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009520607s
I0921 11:30:21.101]     Sep 21 11:08:46.121: INFO: Pod "seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.009173357s
I0921 11:30:21.102]     STEP: Saw pod success 09/21/22 11:08:46.121
I0921 11:30:21.102]     Sep 21 11:08:46.121: INFO: Pod "seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e" satisfied condition "Succeeded or Failed"
I0921 11:30:21.102]     Sep 21 11:08:46.123: INFO: Trying to get logs from node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d pod seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e container seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e: <nil>
I0921 11:30:21.102]     STEP: delete the pod 09/21/22 11:08:46.135
I0921 11:30:21.102]     Sep 21 11:08:46.139: INFO: Waiting for pod seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e to disappear
I0921 11:30:21.102]     Sep 21 11:08:46.143: INFO: Pod seccompdefault-test-f1dfbc2d-41e4-4814-aeaa-4a7bfb66065e no longer exists
I0921 11:30:21.102]     [AfterEach] with SeccompDefault enabled
I0921 11:30:21.103]       test/e2e_node/util.go:181
... skipping 3 lines ...
I0921 11:30:21.103] 
I0921 11:30:21.104]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.104]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.104]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.104]     1 loaded units listed.
I0921 11:30:21.104]     , kubelet-20220921T102832
I0921 11:30:21.104]     W0921 11:08:46.297142    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:35890->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:21.104]     STEP: Starting the kubelet 09/21/22 11:08:46.308
I0921 11:30:21.105]     W0921 11:08:46.367053    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:21.105]     Sep 21 11:08:51.373: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.105]     Sep 21 11:08:52.377: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.105]     Sep 21 11:08:53.379: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.106]     Sep 21 11:08:54.383: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.106]     Sep 21 11:08:55.386: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.106]     Sep 21 11:08:56.389: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 73 lines ...
I0921 11:30:21.120] 
I0921 11:30:21.120] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.121] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.121] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.121] 1 loaded units listed.
I0921 11:30:21.121] , kubelet-20220921T102832
I0921 11:30:21.121] W0921 11:08:57.649936    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:54338->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:21.122] STEP: Starting the kubelet 09/21/22 11:08:57.659
I0921 11:30:21.122] W0921 11:08:57.716252    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:21.122] Sep 21 11:09:02.722: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.123] Sep 21 11:09:03.725: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.123] Sep 21 11:09:04.728: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.123] Sep 21 11:09:05.731: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.124] Sep 21 11:09:06.734: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.124] Sep 21 11:09:07.737: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 63 lines ...
I0921 11:30:21.137] 
I0921 11:30:21.138] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.138] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.138] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.138] 1 loaded units listed.
I0921 11:30:21.138] , kubelet-20220921T102832
I0921 11:30:21.139] W0921 11:09:46.957935    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:41174->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:21.139] STEP: Starting the kubelet 09/21/22 11:09:46.967
I0921 11:30:21.139] W0921 11:09:47.029293    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:21.140] Sep 21 11:09:52.032: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.140] Sep 21 11:09:53.035: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.140] Sep 21 11:09:54.039: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.141] Sep 21 11:09:55.042: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.141] Sep 21 11:09:56.045: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.142] Sep 21 11:09:57.047: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 32 lines ...
I0921 11:30:21.148] 
I0921 11:30:21.148]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.149]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.149]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.149]     1 loaded units listed.
I0921 11:30:21.149]     , kubelet-20220921T102832
... skipping 63 lines ...
I0921 11:30:21.165] 
I0921 11:30:21.166]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.166]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.166]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.166]     1 loaded units listed.
I0921 11:30:21.166]     , kubelet-20220921T102832
... skipping 23 lines ...
I0921 11:30:21.173] 
I0921 11:30:21.173] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.173] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.173] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.174] 1 loaded units listed.
I0921 11:30:21.174] , kubelet-20220921T102832
I0921 11:30:21.174] W0921 11:09:58.227233    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:58892->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:21.174] STEP: Starting the kubelet 09/21/22 11:09:58.236
I0921 11:30:21.174] W0921 11:09:58.294935    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:21.175] Sep 21 11:10:03.298: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:21.175] Sep 21 11:10:04.301: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:21.175] Sep 21 11:10:05.304: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:21.176] Sep 21 11:10:06.307: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:21.176] Sep 21 11:10:07.309: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:21.176] Sep 21 11:10:08.312: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:21.177] [It] a pod failing to mount volumes and without init containers should report scheduled and initialized conditions set
I0921 11:30:21.177]   test/e2e_node/pod_conditions_test.go:58
I0921 11:30:21.177] STEP: creating a pod whose sandbox creation is blocked due to a missing volume 09/21/22 11:10:09.314
I0921 11:30:21.177] STEP: waiting until kubelet has started trying to set up the pod and started to fail 09/21/22 11:10:09.322
I0921 11:30:21.177] STEP: checking pod condition for a pod whose sandbox creation is blocked 09/21/22 11:10:11.332
I0921 11:30:21.177] [AfterEach] including PodHasNetwork condition [Serial] [Feature:PodHasNetwork]
I0921 11:30:21.178]   test/e2e_node/util.go:181
I0921 11:30:21.178] STEP: Stopping the kubelet 09/21/22 11:10:11.333
I0921 11:30:21.178] Sep 21 11:10:11.383: INFO: Get running kubelet with systemctl:   UNIT                            LOAD   ACTIVE SUB     DESCRIPTION
I0921 11:30:21.179]   kubelet-20220921T102832.service loaded active running /tmp/node-e2e-20220921T102832/kubelet --kubeconfig /tmp/node-e2e-20220921T102832/kubeconfig --root-dir /var/lib/kubelet --v 4 --feature-gates LocalStorageCapacityIsolation=true --hostname-override n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20220921T102832/kubelet-config --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service
I0921 11:30:21.179] 
I0921 11:30:21.179] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.179] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.179] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.179] 1 loaded units listed.
I0921 11:30:21.179] , kubelet-20220921T102832
I0921 11:30:21.180] W0921 11:10:11.494969    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:58824->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:21.180] STEP: Starting the kubelet 09/21/22 11:10:11.507
I0921 11:30:21.180] W0921 11:10:11.565399    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:21.180] Sep 21 11:10:16.569: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:21.181] Sep 21 11:10:17.572: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:21.181] Sep 21 11:10:18.575: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:21.181] Sep 21 11:10:19.578: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:21.182] Sep 21 11:10:20.581: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:21.182] Sep 21 11:10:21.584: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
... skipping 26 lines ...
I0921 11:30:21.187] 
I0921 11:30:21.187]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.187]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.187]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.187]     1 loaded units listed.
I0921 11:30:21.188]     , kubelet-20220921T102832
... skipping 26 lines ...
I0921 11:30:21.201] 
I0921 11:30:21.202] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.202] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.202] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.202] 1 loaded units listed.
I0921 11:30:21.202] , kubelet-20220921T102832
I0921 11:30:21.203] W0921 11:10:22.773961    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:36804->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:21.203] STEP: Starting the kubelet 09/21/22 11:10:22.783
I0921 11:30:21.203] W0921 11:10:22.839164    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:21.203] Sep 21 11:10:27.842: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.204] Sep 21 11:10:28.845: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.204] Sep 21 11:10:29.848: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.205] Sep 21 11:10:30.851: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.205] Sep 21 11:10:31.854: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.205] Sep 21 11:10:32.856: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 29 lines ...
I0921 11:30:21.211] 
I0921 11:30:21.211]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.211]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.211]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.211]     1 loaded units listed.
I0921 11:30:21.211]     , kubelet-20220921T102832
... skipping 30 lines ...
I0921 11:30:21.219] 
I0921 11:30:21.219] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.220] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.220] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.220] 1 loaded units listed.
I0921 11:30:21.220] , kubelet-20220921T102832
I0921 11:30:21.220] W0921 11:10:34.073133    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:56454->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:21.220] STEP: Starting the kubelet 09/21/22 11:10:34.082
I0921 11:30:21.221] W0921 11:10:34.143071    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:21.221] Sep 21 11:10:39.147: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.221] Sep 21 11:10:40.151: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.222] Sep 21 11:10:41.154: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.222] Sep 21 11:10:42.157: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.222] Sep 21 11:10:43.160: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.222] Sep 21 11:10:44.163: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 17 lines ...
I0921 11:30:21.226] 
I0921 11:30:21.226] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.226] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.226] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.226] 1 loaded units listed.
I0921 11:30:21.226] , kubelet-20220921T102832
I0921 11:30:21.227] W0921 11:10:45.341953    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:34048->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:21.227] STEP: Starting the kubelet 09/21/22 11:10:45.353
I0921 11:30:21.227] W0921 11:10:45.404570    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:21.227] Sep 21 11:10:50.409: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.228] Sep 21 11:10:51.412: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.228] Sep 21 11:10:52.414: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.228] Sep 21 11:10:53.417: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.228] Sep 21 11:10:54.420: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.229] Sep 21 11:10:55.423: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 32 lines ...
I0921 11:30:21.234] 
I0921 11:30:21.234]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.234]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.235]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.235]     1 loaded units listed.
I0921 11:30:21.235]     , kubelet-20220921T102832
I0921 11:30:21.235]     W0921 11:10:34.073133    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:56454->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:21.235]     STEP: Starting the kubelet 09/21/22 11:10:34.082
I0921 11:30:21.236]     W0921 11:10:34.143071    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:21.236]     Sep 21 11:10:39.147: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.236]     Sep 21 11:10:40.151: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.236]     Sep 21 11:10:41.154: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.237]     Sep 21 11:10:42.157: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.237]     Sep 21 11:10:43.160: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.237]     Sep 21 11:10:44.163: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 17 lines ...
I0921 11:30:21.240] 
I0921 11:30:21.240]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.241]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.241]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.241]     1 loaded units listed.
I0921 11:30:21.241]     , kubelet-20220921T102832
I0921 11:30:21.241]     W0921 11:10:45.341953    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:34048->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:21.241]     STEP: Starting the kubelet 09/21/22 11:10:45.353
I0921 11:30:21.242]     W0921 11:10:45.404570    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:21.242]     Sep 21 11:10:50.409: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.242]     Sep 21 11:10:51.412: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.242]     Sep 21 11:10:52.414: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.243]     Sep 21 11:10:53.417: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.243]     Sep 21 11:10:54.420: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.243]     Sep 21 11:10:55.423: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 31 lines ...
I0921 11:30:21.249] 
I0921 11:30:21.249] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.249] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.249] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.249] 1 loaded units listed.
I0921 11:30:21.249] , kubelet-20220921T102832
I0921 11:30:21.250] W0921 11:10:56.615953    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:46580->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:21.250] STEP: Starting the kubelet 09/21/22 11:10:56.627
I0921 11:30:21.250] W0921 11:10:56.679211    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:21.250] Sep 21 11:11:01.727: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.251] Sep 21 11:11:02.730: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.251] Sep 21 11:11:03.733: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.251] Sep 21 11:11:04.735: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.251] Sep 21 11:11:05.739: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.252] Sep 21 11:11:06.741: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 31 lines ...
I0921 11:30:21.257] 
I0921 11:30:21.257]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.257]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.258]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.258]     1 loaded units listed.
I0921 11:30:21.258]     , kubelet-20220921T102832
I0921 11:30:21.258]     W0921 11:10:56.615953    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:46580->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:21.258]     STEP: Starting the kubelet 09/21/22 11:10:56.627
I0921 11:30:21.259]     W0921 11:10:56.679211    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:21.259]     Sep 21 11:11:01.727: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.259]     Sep 21 11:11:02.730: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.259]     Sep 21 11:11:03.733: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.260]     Sep 21 11:11:04.735: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.260]     Sep 21 11:11:05.739: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.261]     Sep 21 11:11:06.741: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 18 lines ...
I0921 11:30:21.264] STEP: Creating a kubernetes client 09/21/22 11:11:07.753
I0921 11:30:21.265] STEP: Building a namespace api object, basename downward-api 09/21/22 11:11:07.753
I0921 11:30:21.265] Sep 21 11:11:07.760: INFO: Skipping waiting for service account
I0921 11:30:21.265] [It] should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
I0921 11:30:21.265]   test/e2e/common/storage/downwardapi.go:38
I0921 11:30:21.265] STEP: Creating a pod to test downward api env vars 09/21/22 11:11:07.76
I0921 11:30:21.266] Sep 21 11:11:07.768: INFO: Waiting up to 5m0s for pod "downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963" in namespace "downward-api-7241" to be "Succeeded or Failed"
I0921 11:30:21.266] Sep 21 11:11:07.770: INFO: Pod "downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963": Phase="Pending", Reason="", readiness=false. Elapsed: 1.611778ms
I0921 11:30:21.266] Sep 21 11:11:09.772: INFO: Pod "downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003973964s
I0921 11:30:21.266] Sep 21 11:11:11.773: INFO: Pod "downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963": Phase="Pending", Reason="", readiness=false. Elapsed: 4.004243115s
I0921 11:30:21.267] Sep 21 11:11:13.773: INFO: Pod "downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.00491682s
I0921 11:30:21.267] STEP: Saw pod success 09/21/22 11:11:13.773
I0921 11:30:21.267] Sep 21 11:11:13.773: INFO: Pod "downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963" satisfied condition "Succeeded or Failed"
I0921 11:30:21.267] Sep 21 11:11:13.775: INFO: Trying to get logs from node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d pod downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963 container dapi-container: <nil>
I0921 11:30:21.267] STEP: delete the pod 09/21/22 11:11:13.789
I0921 11:30:21.268] Sep 21 11:11:13.792: INFO: Waiting for pod downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963 to disappear
I0921 11:30:21.268] Sep 21 11:11:13.797: INFO: Pod downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963 no longer exists
I0921 11:30:21.268] [DeferCleanup] [sig-storage] Downward API [Serial] [Disruptive] [Feature:EphemeralStorage]
I0921 11:30:21.268]   dump namespaces | framework.go:173
... skipping 16 lines ...
I0921 11:30:21.271]     STEP: Creating a kubernetes client 09/21/22 11:11:07.753
I0921 11:30:21.271]     STEP: Building a namespace api object, basename downward-api 09/21/22 11:11:07.753
I0921 11:30:21.271]     Sep 21 11:11:07.760: INFO: Skipping waiting for service account
I0921 11:30:21.271]     [It] should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
I0921 11:30:21.272]       test/e2e/common/storage/downwardapi.go:38
I0921 11:30:21.272]     STEP: Creating a pod to test downward api env vars 09/21/22 11:11:07.76
I0921 11:30:21.272]     Sep 21 11:11:07.768: INFO: Waiting up to 5m0s for pod "downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963" in namespace "downward-api-7241" to be "Succeeded or Failed"
I0921 11:30:21.272]     Sep 21 11:11:07.770: INFO: Pod "downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963": Phase="Pending", Reason="", readiness=false. Elapsed: 1.611778ms
I0921 11:30:21.273]     Sep 21 11:11:09.772: INFO: Pod "downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963": Phase="Pending", Reason="", readiness=false. Elapsed: 2.003973964s
I0921 11:30:21.273]     Sep 21 11:11:11.773: INFO: Pod "downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963": Phase="Pending", Reason="", readiness=false. Elapsed: 4.004243115s
I0921 11:30:21.273]     Sep 21 11:11:13.773: INFO: Pod "downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.00491682s
I0921 11:30:21.273]     STEP: Saw pod success 09/21/22 11:11:13.773
I0921 11:30:21.273]     Sep 21 11:11:13.773: INFO: Pod "downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963" satisfied condition "Succeeded or Failed"
I0921 11:30:21.274]     Sep 21 11:11:13.775: INFO: Trying to get logs from node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d pod downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963 container dapi-container: <nil>
I0921 11:30:21.274]     STEP: delete the pod 09/21/22 11:11:13.789
I0921 11:30:21.274]     Sep 21 11:11:13.792: INFO: Waiting for pod downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963 to disappear
I0921 11:30:21.274]     Sep 21 11:11:13.797: INFO: Pod downward-api-d870f5c4-1a04-4905-be33-e522c3fe7963 no longer exists
I0921 11:30:21.274]     [DeferCleanup] [sig-storage] Downward API [Serial] [Disruptive] [Feature:EphemeralStorage]
I0921 11:30:21.274]       dump namespaces | framework.go:173
... skipping 517 lines ...
I0921 11:30:21.375]     STEP: Destroying namespace "node-label-reconciliation-5927" for this suite. 09/21/22 11:13:49.748
I0921 11:30:21.375]   << End Captured GinkgoWriter Output
I0921 11:30:21.375] ------------------------------
I0921 11:30:21.376] SS
I0921 11:30:21.376] ------------------------------
I0921 11:30:21.376] [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources] without SRIOV devices in the system with disabled KubeletPodResourcesGetAllocatable feature gate
I0921 11:30:21.376]   should return the expected error with the feature gate disabled
I0921 11:30:21.376]   test/e2e_node/podresources_test.go:712
I0921 11:30:21.377] [BeforeEach] [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources]
I0921 11:30:21.377]   set up framework | framework.go:158
I0921 11:30:21.377] STEP: Creating a kubernetes client 09/21/22 11:13:49.755
I0921 11:30:21.377] STEP: Building a namespace api object, basename podresources-test 09/21/22 11:13:49.755
I0921 11:30:21.377] Sep 21 11:13:49.763: INFO: Skipping waiting for service account
... skipping 7 lines ...
I0921 11:30:21.379] 
I0921 11:30:21.379] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.379] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.379] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.379] 1 loaded units listed.
I0921 11:30:21.380] , kubelet-20220921T102832
I0921 11:30:21.380] W0921 11:13:49.944001    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:58570->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:21.380] STEP: Starting the kubelet 09/21/22 11:13:49.953
I0921 11:30:21.380] W0921 11:13:50.016733    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:21.381] Sep 21 11:13:55.023: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:21.381] Sep 21 11:13:56.026: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:21.381] Sep 21 11:13:57.029: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:21.382] Sep 21 11:13:58.032: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:21.382] Sep 21 11:13:59.035: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:21.382] Sep 21 11:14:00.038: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:21.382] [It] should return the expected error with the feature gate disabled
I0921 11:30:21.383]   test/e2e_node/podresources_test.go:712
I0921 11:30:21.383] STEP: checking GetAllocatableResources fail if the feature gate is not enabled 09/21/22 11:14:01.041
I0921 11:30:21.383] Sep 21 11:14:01.044: INFO: GetAllocatableResources result: nil, err: rpc error: code = Unknown desc = Pod Resources API GetAllocatableResources disabled
I0921 11:30:21.383] [AfterEach] with disabled KubeletPodResourcesGetAllocatable feature gate
I0921 11:30:21.383]   test/e2e_node/util.go:181
I0921 11:30:21.384] STEP: Stopping the kubelet 09/21/22 11:14:01.045
I0921 11:30:21.384] Sep 21 11:14:01.091: INFO: Get running kubelet with systemctl:   UNIT                            LOAD   ACTIVE SUB     DESCRIPTION
I0921 11:30:21.384]   kubelet-20220921T102832.service loaded active running /tmp/node-e2e-20220921T102832/kubelet --kubeconfig /tmp/node-e2e-20220921T102832/kubeconfig --root-dir /var/lib/kubelet --v 4 --feature-gates LocalStorageCapacityIsolation=true --hostname-override n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20220921T102832/kubelet-config --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service
I0921 11:30:21.384] 
I0921 11:30:21.385] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.385] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.385] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.385] 1 loaded units listed.
I0921 11:30:21.385] , kubelet-20220921T102832
I0921 11:30:21.385] W0921 11:14:01.192942    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:57372->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:21.386] STEP: Starting the kubelet 09/21/22 11:14:01.204
I0921 11:30:21.386] W0921 11:14:01.261080    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:21.386] Sep 21 11:14:06.268: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.387] Sep 21 11:14:07.270: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.387] Sep 21 11:14:08.273: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.387] Sep 21 11:14:09.276: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.388] Sep 21 11:14:10.278: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.388] Sep 21 11:14:11.282: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 8 lines ...
I0921 11:30:21.390] [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources]
I0921 11:30:21.390] test/e2e_node/framework.go:23
I0921 11:30:21.390]   without SRIOV devices in the system
I0921 11:30:21.390]   test/e2e_node/podresources_test.go:643
I0921 11:30:21.390]     with disabled KubeletPodResourcesGetAllocatable feature gate
I0921 11:30:21.390]     test/e2e_node/podresources_test.go:704
I0921 11:30:21.391]       should return the expected error with the feature gate disabled
I0921 11:30:21.391]       test/e2e_node/podresources_test.go:712
I0921 11:30:21.391] 
I0921 11:30:21.391]   Begin Captured GinkgoWriter Output >>
I0921 11:30:21.391]     [BeforeEach] [sig-node] POD Resources [Serial] [Feature:PodResources][NodeFeature:PodResources]
I0921 11:30:21.391]       set up framework | framework.go:158
I0921 11:30:21.392]     STEP: Creating a kubernetes client 09/21/22 11:13:49.755
... skipping 9 lines ...
I0921 11:30:21.393] 
I0921 11:30:21.394]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.394]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.394]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.394]     1 loaded units listed.
I0921 11:30:21.394]     , kubelet-20220921T102832
I0921 11:30:21.394]     W0921 11:13:49.944001    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:58570->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:21.395]     STEP: Starting the kubelet 09/21/22 11:13:49.953
I0921 11:30:21.395]     W0921 11:13:50.016733    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:21.395]     Sep 21 11:13:55.023: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:21.396]     Sep 21 11:13:56.026: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:21.396]     Sep 21 11:13:57.029: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:21.396]     Sep 21 11:13:58.032: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:21.397]     Sep 21 11:13:59.035: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:21.397]     Sep 21 11:14:00.038: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0921 11:30:21.398]     [It] should return the expected error with the feature gate disabled
I0921 11:30:21.398]       test/e2e_node/podresources_test.go:712
I0921 11:30:21.398]     STEP: checking GetAllocatableResources fail if the feature gate is not enabled 09/21/22 11:14:01.041
I0921 11:30:21.398]     Sep 21 11:14:01.044: INFO: GetAllocatableResources result: nil, err: rpc error: code = Unknown desc = Pod Resources API GetAllocatableResources disabled
I0921 11:30:21.399]     [AfterEach] with disabled KubeletPodResourcesGetAllocatable feature gate
I0921 11:30:21.399]       test/e2e_node/util.go:181
I0921 11:30:21.399]     STEP: Stopping the kubelet 09/21/22 11:14:01.045
I0921 11:30:21.399]     Sep 21 11:14:01.091: INFO: Get running kubelet with systemctl:   UNIT                            LOAD   ACTIVE SUB     DESCRIPTION
I0921 11:30:21.400]       kubelet-20220921T102832.service loaded active running /tmp/node-e2e-20220921T102832/kubelet --kubeconfig /tmp/node-e2e-20220921T102832/kubeconfig --root-dir /var/lib/kubelet --v 4 --feature-gates LocalStorageCapacityIsolation=true --hostname-override n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20220921T102832/kubelet-config --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service
I0921 11:30:21.400] 
I0921 11:30:21.400]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.401]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.401]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.401]     1 loaded units listed.
I0921 11:30:21.401]     , kubelet-20220921T102832
I0921 11:30:21.401]     W0921 11:14:01.192942    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:57372->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:21.401]     STEP: Starting the kubelet 09/21/22 11:14:01.204
I0921 11:30:21.402]     W0921 11:14:01.261080    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:21.402]     Sep 21 11:14:06.268: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.403]     Sep 21 11:14:07.270: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.403]     Sep 21 11:14:08.273: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.403]     Sep 21 11:14:09.276: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.404]     Sep 21 11:14:10.278: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.404]     Sep 21 11:14:11.282: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 50 lines ...
I0921 11:30:21.414] STEP: Wait for 0 temp events generated 09/21/22 11:14:28.32
I0921 11:30:21.414] STEP: Wait for 0 total events generated 09/21/22 11:14:28.332
I0921 11:30:21.414] STEP: Make sure only 0 total events generated 09/21/22 11:14:28.341
I0921 11:30:21.414] STEP: Make sure node condition "TestCondition" is set 09/21/22 11:14:33.341
I0921 11:30:21.415] STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:14:33.344
I0921 11:30:21.415] STEP: should not generate events for too old log 09/21/22 11:14:38.344
I0921 11:30:21.415] STEP: Inject 3 logs: "temporary error" 09/21/22 11:14:38.344
I0921 11:30:21.415] STEP: Wait for 0 temp events generated 09/21/22 11:14:38.345
I0921 11:30:21.415] STEP: Wait for 0 total events generated 09/21/22 11:14:38.354
I0921 11:30:21.416] STEP: Make sure only 0 total events generated 09/21/22 11:14:38.362
I0921 11:30:21.416] STEP: Make sure node condition "TestCondition" is set 09/21/22 11:14:43.362
I0921 11:30:21.416] STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:14:43.365
I0921 11:30:21.416] STEP: should not change node condition for too old log 09/21/22 11:14:48.365
I0921 11:30:21.416] STEP: Inject 1 logs: "permanent error 1" 09/21/22 11:14:48.365
I0921 11:30:21.416] STEP: Wait for 0 temp events generated 09/21/22 11:14:48.365
I0921 11:30:21.417] STEP: Wait for 0 total events generated 09/21/22 11:14:48.374
I0921 11:30:21.417] STEP: Make sure only 0 total events generated 09/21/22 11:14:48.381
I0921 11:30:21.417] STEP: Make sure node condition "TestCondition" is set 09/21/22 11:14:53.382
I0921 11:30:21.417] STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:14:53.384
I0921 11:30:21.417] STEP: should generate event for old log within lookback duration 09/21/22 11:14:58.384
I0921 11:30:21.418] STEP: Inject 3 logs: "temporary error" 09/21/22 11:14:58.384
I0921 11:30:21.418] STEP: Wait for 3 temp events generated 09/21/22 11:14:58.385
I0921 11:30:21.418] STEP: Wait for 3 total events generated 09/21/22 11:14:59.403
I0921 11:30:21.418] STEP: Make sure only 3 total events generated 09/21/22 11:14:59.412
I0921 11:30:21.418] STEP: Make sure node condition "TestCondition" is set 09/21/22 11:15:04.412
I0921 11:30:21.419] STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:15:04.418
I0921 11:30:21.419] STEP: should change node condition for old log within lookback duration 09/21/22 11:15:09.418
I0921 11:30:21.419] STEP: Inject 1 logs: "permanent error 1" 09/21/22 11:15:09.418
I0921 11:30:21.419] STEP: Wait for 3 temp events generated 09/21/22 11:15:09.419
I0921 11:30:21.419] STEP: Wait for 4 total events generated 09/21/22 11:15:09.428
I0921 11:30:21.419] STEP: Make sure only 4 total events generated 09/21/22 11:15:10.454
I0921 11:30:21.420] STEP: Make sure node condition "TestCondition" is set 09/21/22 11:15:15.454
I0921 11:30:21.420] STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:15:15.457
I0921 11:30:21.420] STEP: should generate event for new log 09/21/22 11:15:20.457
I0921 11:30:21.420] STEP: Inject 3 logs: "temporary error" 09/21/22 11:15:20.457
I0921 11:30:21.420] STEP: Wait for 6 temp events generated 09/21/22 11:15:20.458
I0921 11:30:21.421] STEP: Wait for 7 total events generated 09/21/22 11:15:21.475
I0921 11:30:21.421] STEP: Make sure only 7 total events generated 09/21/22 11:15:21.484
I0921 11:30:21.421] STEP: Make sure node condition "TestCondition" is set 09/21/22 11:15:26.484
I0921 11:30:21.421] STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:15:26.487
I0921 11:30:21.421] STEP: should not update node condition with the same reason 09/21/22 11:15:31.487
I0921 11:30:21.422] STEP: Inject 1 logs: "permanent error 1different message" 09/21/22 11:15:31.488
I0921 11:30:21.422] STEP: Wait for 6 temp events generated 09/21/22 11:15:31.488
I0921 11:30:21.422] STEP: Wait for 7 total events generated 09/21/22 11:15:31.497
I0921 11:30:21.422] STEP: Make sure only 7 total events generated 09/21/22 11:15:31.503
I0921 11:30:21.422] STEP: Make sure node condition "TestCondition" is set 09/21/22 11:15:36.503
I0921 11:30:21.422] STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:15:36.506
I0921 11:30:21.423] STEP: should change node condition for new log 09/21/22 11:15:41.506
I0921 11:30:21.423] STEP: Inject 1 logs: "permanent error 2" 09/21/22 11:15:41.506
I0921 11:30:21.423] STEP: Wait for 6 temp events generated 09/21/22 11:15:41.507
I0921 11:30:21.423] STEP: Wait for 8 total events generated 09/21/22 11:15:41.515
I0921 11:30:21.423] STEP: Make sure only 8 total events generated 09/21/22 11:15:42.534
I0921 11:30:21.424] STEP: Make sure node condition "TestCondition" is set 09/21/22 11:15:47.534
I0921 11:30:21.424] STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:15:47.536
I0921 11:30:21.424] [AfterEach] SystemLogMonitor
... skipping 61 lines ...
I0921 11:30:21.435]     STEP: Wait for 0 temp events generated 09/21/22 11:14:28.32
I0921 11:30:21.436]     STEP: Wait for 0 total events generated 09/21/22 11:14:28.332
I0921 11:30:21.436]     STEP: Make sure only 0 total events generated 09/21/22 11:14:28.341
I0921 11:30:21.436]     STEP: Make sure node condition "TestCondition" is set 09/21/22 11:14:33.341
I0921 11:30:21.436]     STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:14:33.344
I0921 11:30:21.436]     STEP: should not generate events for too old log 09/21/22 11:14:38.344
I0921 11:30:21.437]     STEP: Inject 3 logs: "temporary error" 09/21/22 11:14:38.344
I0921 11:30:21.437]     STEP: Wait for 0 temp events generated 09/21/22 11:14:38.345
I0921 11:30:21.437]     STEP: Wait for 0 total events generated 09/21/22 11:14:38.354
I0921 11:30:21.437]     STEP: Make sure only 0 total events generated 09/21/22 11:14:38.362
I0921 11:30:21.437]     STEP: Make sure node condition "TestCondition" is set 09/21/22 11:14:43.362
I0921 11:30:21.437]     STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:14:43.365
I0921 11:30:21.438]     STEP: should not change node condition for too old log 09/21/22 11:14:48.365
I0921 11:30:21.438]     STEP: Inject 1 logs: "permanent error 1" 09/21/22 11:14:48.365
I0921 11:30:21.438]     STEP: Wait for 0 temp events generated 09/21/22 11:14:48.365
I0921 11:30:21.438]     STEP: Wait for 0 total events generated 09/21/22 11:14:48.374
I0921 11:30:21.438]     STEP: Make sure only 0 total events generated 09/21/22 11:14:48.381
I0921 11:30:21.439]     STEP: Make sure node condition "TestCondition" is set 09/21/22 11:14:53.382
I0921 11:30:21.439]     STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:14:53.384
I0921 11:30:21.439]     STEP: should generate event for old log within lookback duration 09/21/22 11:14:58.384
I0921 11:30:21.439]     STEP: Inject 3 logs: "temporary error" 09/21/22 11:14:58.384
I0921 11:30:21.439]     STEP: Wait for 3 temp events generated 09/21/22 11:14:58.385
I0921 11:30:21.439]     STEP: Wait for 3 total events generated 09/21/22 11:14:59.403
I0921 11:30:21.440]     STEP: Make sure only 3 total events generated 09/21/22 11:14:59.412
I0921 11:30:21.440]     STEP: Make sure node condition "TestCondition" is set 09/21/22 11:15:04.412
I0921 11:30:21.440]     STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:15:04.418
I0921 11:30:21.440]     STEP: should change node condition for old log within lookback duration 09/21/22 11:15:09.418
I0921 11:30:21.440]     STEP: Inject 1 logs: "permanent error 1" 09/21/22 11:15:09.418
I0921 11:30:21.441]     STEP: Wait for 3 temp events generated 09/21/22 11:15:09.419
I0921 11:30:21.441]     STEP: Wait for 4 total events generated 09/21/22 11:15:09.428
I0921 11:30:21.441]     STEP: Make sure only 4 total events generated 09/21/22 11:15:10.454
I0921 11:30:21.441]     STEP: Make sure node condition "TestCondition" is set 09/21/22 11:15:15.454
I0921 11:30:21.441]     STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:15:15.457
I0921 11:30:21.441]     STEP: should generate event for new log 09/21/22 11:15:20.457
I0921 11:30:21.442]     STEP: Inject 3 logs: "temporary error" 09/21/22 11:15:20.457
I0921 11:30:21.442]     STEP: Wait for 6 temp events generated 09/21/22 11:15:20.458
I0921 11:30:21.442]     STEP: Wait for 7 total events generated 09/21/22 11:15:21.475
I0921 11:30:21.442]     STEP: Make sure only 7 total events generated 09/21/22 11:15:21.484
I0921 11:30:21.442]     STEP: Make sure node condition "TestCondition" is set 09/21/22 11:15:26.484
I0921 11:30:21.442]     STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:15:26.487
I0921 11:30:21.443]     STEP: should not update node condition with the same reason 09/21/22 11:15:31.487
I0921 11:30:21.443]     STEP: Inject 1 logs: "permanent error 1different message" 09/21/22 11:15:31.488
I0921 11:30:21.443]     STEP: Wait for 6 temp events generated 09/21/22 11:15:31.488
I0921 11:30:21.443]     STEP: Wait for 7 total events generated 09/21/22 11:15:31.497
I0921 11:30:21.444]     STEP: Make sure only 7 total events generated 09/21/22 11:15:31.503
I0921 11:30:21.444]     STEP: Make sure node condition "TestCondition" is set 09/21/22 11:15:36.503
I0921 11:30:21.444]     STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:15:36.506
I0921 11:30:21.444]     STEP: should change node condition for new log 09/21/22 11:15:41.506
I0921 11:30:21.444]     STEP: Inject 1 logs: "permanent error 2" 09/21/22 11:15:41.506
I0921 11:30:21.445]     STEP: Wait for 6 temp events generated 09/21/22 11:15:41.507
I0921 11:30:21.445]     STEP: Wait for 8 total events generated 09/21/22 11:15:41.515
I0921 11:30:21.445]     STEP: Make sure only 8 total events generated 09/21/22 11:15:42.534
I0921 11:30:21.445]     STEP: Make sure node condition "TestCondition" is set 09/21/22 11:15:47.534
I0921 11:30:21.446]     STEP: Make sure node condition "TestCondition" is stable 09/21/22 11:15:47.536
I0921 11:30:21.446]     [AfterEach] SystemLogMonitor
... skipping 35 lines ...
I0921 11:30:21.453] 
I0921 11:30:21.453] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.453] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.454] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.454] 1 loaded units listed.
I0921 11:30:21.454] , kubelet-20220921T102832
I0921 11:30:21.454] W0921 11:15:52.880260    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:40208->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:21.454] STEP: Starting the kubelet 09/21/22 11:15:52.912
I0921 11:30:21.455] W0921 11:15:52.969134    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:21.455] Sep 21 11:15:57.999: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.455] Sep 21 11:15:59.001: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.455] Sep 21 11:16:00.004: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.456] Sep 21 11:16:01.007: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.456] Sep 21 11:16:02.009: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.456] Sep 21 11:16:03.012: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 110 lines ...
I0921 11:30:21.475] 
I0921 11:30:21.476] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.476] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.476] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.476] 1 loaded units listed.
I0921 11:30:21.476] , kubelet-20220921T102832
I0921 11:30:21.476] W0921 11:17:18.256946    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:37732->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:21.477] STEP: Starting the kubelet 09/21/22 11:17:18.27
I0921 11:30:21.477] W0921 11:17:18.328166    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:21.477] Sep 21 11:17:23.331: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.477] Sep 21 11:17:24.335: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.478] Sep 21 11:17:25.338: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.478] Sep 21 11:17:26.341: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.478] Sep 21 11:17:27.344: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.479] Sep 21 11:17:28.347: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 32 lines ...
I0921 11:30:21.483] 
I0921 11:30:21.483]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.483]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.484]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.484]     1 loaded units listed.
I0921 11:30:21.484]     , kubelet-20220921T102832
I0921 11:30:21.484]     W0921 11:15:52.880260    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:40208->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:21.484]     STEP: Starting the kubelet 09/21/22 11:15:52.912
I0921 11:30:21.485]     W0921 11:15:52.969134    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:21.485]     Sep 21 11:15:57.999: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.485]     Sep 21 11:15:59.001: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.485]     Sep 21 11:16:00.004: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.486]     Sep 21 11:16:01.007: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.486]     Sep 21 11:16:02.009: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.486]     Sep 21 11:16:03.012: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 110 lines ...
I0921 11:30:21.505] 
I0921 11:30:21.505]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.505]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.506]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.506]     1 loaded units listed.
I0921 11:30:21.506]     , kubelet-20220921T102832
I0921 11:30:21.506]     W0921 11:17:18.256946    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:37732->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:21.506]     STEP: Starting the kubelet 09/21/22 11:17:18.27
I0921 11:30:21.506]     W0921 11:17:18.328166    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:21.507]     Sep 21 11:17:23.331: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.507]     Sep 21 11:17:24.335: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.507]     Sep 21 11:17:25.338: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.507]     Sep 21 11:17:26.341: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.508]     Sep 21 11:17:27.344: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.508]     Sep 21 11:17:28.347: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 78 lines ...
I0921 11:30:21.520] 
I0921 11:30:21.521] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.521] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.521] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.521] 1 loaded units listed.
I0921 11:30:21.521] , kubelet-20220921T102832
I0921 11:30:21.521] W0921 11:17:33.643962    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:40892->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:21.522] STEP: Starting the kubelet 09/21/22 11:17:33.655
I0921 11:30:21.522] W0921 11:17:33.706753    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:21.522] Sep 21 11:17:38.711: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.522] Sep 21 11:17:39.714: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.523] Sep 21 11:17:40.717: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.523] Sep 21 11:17:41.720: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.523] Sep 21 11:17:42.723: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.524] Sep 21 11:17:43.731: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 77 lines ...
I0921 11:30:21.563] 
I0921 11:30:21.563] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.563] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.563] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.563] 1 loaded units listed.
I0921 11:30:21.563] , kubelet-20220921T102832
I0921 11:30:21.564] W0921 11:18:12.035027    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:55544->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:21.564] STEP: Starting the kubelet 09/21/22 11:18:12.048
I0921 11:30:21.564] W0921 11:18:12.106008    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:21.564] Sep 21 11:18:17.109: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.565] Sep 21 11:18:18.112: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.565] Sep 21 11:18:19.114: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.565] Sep 21 11:18:20.117: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.565] Sep 21 11:18:21.120: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.566] Sep 21 11:18:22.123: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 29 lines ...
I0921 11:30:21.571] 
I0921 11:30:21.571]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.571]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.571]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.571]     1 loaded units listed.
I0921 11:30:21.571]     , kubelet-20220921T102832
I0921 11:30:21.571]     W0921 11:17:33.643962    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:40892->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:21.572]     STEP: Starting the kubelet 09/21/22 11:17:33.655
I0921 11:30:21.572]     W0921 11:17:33.706753    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:21.572]     Sep 21 11:17:38.711: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.572]     Sep 21 11:17:39.714: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.573]     Sep 21 11:17:40.717: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.573]     Sep 21 11:17:41.720: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.573]     Sep 21 11:17:42.723: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.573]     Sep 21 11:17:43.731: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 77 lines ...
I0921 11:30:21.610] 
I0921 11:30:21.610]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:21.611]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:21.611]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:21.611]     1 loaded units listed.
I0921 11:30:21.611]     , kubelet-20220921T102832
I0921 11:30:21.611]     W0921 11:18:12.035027    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:55544->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:21.611]     STEP: Starting the kubelet 09/21/22 11:18:12.048
I0921 11:30:21.612]     W0921 11:18:12.106008    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:21.612]     Sep 21 11:18:17.109: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.612]     Sep 21 11:18:18.112: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.613]     Sep 21 11:18:19.114: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.613]     Sep 21 11:18:20.117: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.613]     Sep 21 11:18:21.120: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:21.614]     Sep 21 11:18:22.123: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 7875 lines ...
I0921 11:30:23.518] 
I0921 11:30:23.518] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:23.518] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:23.519] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:23.519] 1 loaded units listed.
I0921 11:30:23.519] , kubelet-20220921T102832
I0921 11:30:23.519] W0921 11:27:40.015926    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:43214->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:23.519] STEP: Starting the kubelet 09/21/22 11:27:40.025
I0921 11:30:23.520] W0921 11:27:40.078319    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:23.520] Sep 21 11:27:45.085: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:23.520] Sep 21 11:27:46.088: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:23.521] Sep 21 11:27:47.091: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:23.521] Sep 21 11:27:48.094: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:23.521] Sep 21 11:27:49.096: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:23.522] Sep 21 11:27:50.100: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 38 lines ...
I0921 11:30:23.530] 
I0921 11:30:23.530] LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:23.530] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:23.530] SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:23.530] 1 loaded units listed.
I0921 11:30:23.531] , kubelet-20220921T102832
I0921 11:30:23.531] W0921 11:29:16.811017    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:41690->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:23.531] STEP: Starting the kubelet 09/21/22 11:29:16.821
I0921 11:30:23.531] W0921 11:29:16.875805    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:23.532] Sep 21 11:29:19.270: INFO: mirror pod "static-disk-hog-a25e8eb9-8643-41ef-9ac6-73bf6d96681d-n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d" is running
I0921 11:30:23.532] STEP: making sure that node no longer has DiskPressure 09/21/22 11:29:20.65
I0921 11:30:23.532] Sep 21 11:29:20.654: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:23.532] Sep 21 11:29:20.654: INFO: Unexpected error: 
I0921 11:30:23.532]     <*errors.errorString | 0xc001a92340>: {
I0921 11:30:23.533]         s: "there are currently no ready, schedulable nodes in the cluster",
I0921 11:30:23.533]     }
I0921 11:30:23.533] Sep 21 11:29:20.654: FAIL: there are currently no ready, schedulable nodes in the cluster
I0921 11:30:23.533] 
I0921 11:30:23.533] Full Stack Trace
I0921 11:30:23.533] k8s.io/kubernetes/test/e2e_node.getLocalNode(0xa0?)
I0921 11:30:23.533] 	test/e2e_node/util.go:254 +0x3f
I0921 11:30:23.534] k8s.io/kubernetes/test/e2e_node.hasNodeCondition(0x0?, {0x56095d7, 0xc})
I0921 11:30:23.534] 	test/e2e_node/eviction_test.go:783 +0x3e
... skipping 66 lines ...
I0921 11:30:23.549] 
I0921 11:30:23.549]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:23.549]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:23.549]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:23.550]     1 loaded units listed.
I0921 11:30:23.550]     , kubelet-20220921T102832
I0921 11:30:23.550]     W0921 11:27:40.015926    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:43214->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:23.550]     STEP: Starting the kubelet 09/21/22 11:27:40.025
I0921 11:30:23.551]     W0921 11:27:40.078319    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:23.551]     Sep 21 11:27:45.085: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:23.551]     Sep 21 11:27:46.088: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:23.552]     Sep 21 11:27:47.091: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:23.552]     Sep 21 11:27:48.094: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:23.552]     Sep 21 11:27:49.096: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:23.552]     Sep 21 11:27:50.100: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
... skipping 38 lines ...
I0921 11:30:23.561] 
I0921 11:30:23.561]     LOAD   = Reflects whether the unit definition was properly loaded.
I0921 11:30:23.561]     ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0921 11:30:23.561]     SUB    = The low-level unit activation state, values depend on unit type.
I0921 11:30:23.561]     1 loaded units listed.
I0921 11:30:23.561]     , kubelet-20220921T102832
I0921 11:30:23.562]     W0921 11:29:16.811017    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:41690->127.0.0.1:10248: read: connection reset by peer
I0921 11:30:23.562]     STEP: Starting the kubelet 09/21/22 11:29:16.821
I0921 11:30:23.562]     W0921 11:29:16.875805    2625 util.go:403] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0921 11:30:23.562]     Sep 21 11:29:19.270: INFO: mirror pod "static-disk-hog-a25e8eb9-8643-41ef-9ac6-73bf6d96681d-n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d" is running
I0921 11:30:23.563]     STEP: making sure that node no longer has DiskPressure 09/21/22 11:29:20.65
I0921 11:30:23.563]     Sep 21 11:29:20.654: INFO: Condition Ready of node n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d is false instead of true. Reason: KubeletNotReady, message: container runtime status check may not have completed yet
I0921 11:30:23.563]     Sep 21 11:29:20.654: INFO: Unexpected error: 
I0921 11:30:23.563]         <*errors.errorString | 0xc001a92340>: {
I0921 11:30:23.563]             s: "there are currently no ready, schedulable nodes in the cluster",
I0921 11:30:23.563]         }
I0921 11:30:23.563]     Sep 21 11:29:20.654: FAIL: there are currently no ready, schedulable nodes in the cluster
I0921 11:30:23.564] 
I0921 11:30:23.564]     Full Stack Trace
I0921 11:30:23.564]     k8s.io/kubernetes/test/e2e_node.getLocalNode(0xa0?)
I0921 11:30:23.564]     	test/e2e_node/util.go:254 +0x3f
I0921 11:30:23.564]     k8s.io/kubernetes/test/e2e_node.hasNodeCondition(0x0?, {0x56095d7, 0xc})
I0921 11:30:23.564]     	test/e2e_node/eviction_test.go:783 +0x3e
... skipping 946 lines ...
I0921 11:30:23.735]   test/e2e_node/e2e_node_suite_test.go:236
I0921 11:30:23.735] [SynchronizedAfterSuite] TOP-LEVEL
I0921 11:30:23.735]   test/e2e_node/e2e_node_suite_test.go:236
I0921 11:30:23.735] I0921 11:29:46.626021    2625 e2e_node_suite_test.go:239] Stopping node services...
I0921 11:30:23.735] I0921 11:29:46.626055    2625 server.go:257] Kill server "services"
I0921 11:30:23.736] I0921 11:29:46.626088    2625 server.go:294] Killing process 3140 (services) with -TERM
I0921 11:30:23.736] E0921 11:29:46.775387    2625 services.go:93] Failed to stop services: error stopping "services": waitid: no child processes
I0921 11:30:23.736] I0921 11:29:46.775404    2625 server.go:257] Kill server "kubelet"
I0921 11:30:23.736] I0921 11:29:46.789872    2625 services.go:149] Fetching log files...
I0921 11:30:23.736] I0921 11:29:46.790344    2625 services.go:158] Get log file "docker.log" with journalctl command [-u docker].
I0921 11:30:23.736] I0921 11:29:46.806628    2625 services.go:158] Get log file "containerd.log" with journalctl command [-u containerd].
I0921 11:30:23.737] I0921 11:29:46.819483    2625 services.go:158] Get log file "containerd-installation.log" with journalctl command [-u containerd-installation].
I0921 11:30:23.737] I0921 11:29:46.835844    2625 services.go:158] Get log file "crio.log" with journalctl command [-u crio].
I0921 11:30:23.737] I0921 11:29:57.643474    2625 services.go:158] Get log file "kern.log" with journalctl command [-k].
I0921 11:30:23.737] I0921 11:29:57.761394    2625 services.go:158] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I0921 11:30:23.738] E0921 11:29:57.794963    2625 services.go:161] failed to get "cloud-init.log" from journald: Failed to add filter for units: No data available
I0921 11:30:23.738] , exit status 1
I0921 11:30:23.738] I0921 11:29:57.794988    2625 e2e_node_suite_test.go:244] Tests Finished
I0921 11:30:23.738] ------------------------------
I0921 11:30:23.738] [SynchronizedAfterSuite] PASSED [11.170 seconds]
I0921 11:30:23.738] [SynchronizedAfterSuite] 
I0921 11:30:23.739] test/e2e_node/e2e_node_suite_test.go:236
... skipping 3 lines ...
I0921 11:30:23.739]       test/e2e_node/e2e_node_suite_test.go:236
I0921 11:30:23.739]     [SynchronizedAfterSuite] TOP-LEVEL
I0921 11:30:23.739]       test/e2e_node/e2e_node_suite_test.go:236
I0921 11:30:23.740]     I0921 11:29:46.626021    2625 e2e_node_suite_test.go:239] Stopping node services...
I0921 11:30:23.740]     I0921 11:29:46.626055    2625 server.go:257] Kill server "services"
I0921 11:30:23.740]     I0921 11:29:46.626088    2625 server.go:294] Killing process 3140 (services) with -TERM
I0921 11:30:23.740]     E0921 11:29:46.775387    2625 services.go:93] Failed to stop services: error stopping "services": waitid: no child processes
I0921 11:30:23.740]     I0921 11:29:46.775404    2625 server.go:257] Kill server "kubelet"
I0921 11:30:23.741]     I0921 11:29:46.789872    2625 services.go:149] Fetching log files...
I0921 11:30:23.741]     I0921 11:29:46.790344    2625 services.go:158] Get log file "docker.log" with journalctl command [-u docker].
I0921 11:30:23.741]     I0921 11:29:46.806628    2625 services.go:158] Get log file "containerd.log" with journalctl command [-u containerd].
I0921 11:30:23.741]     I0921 11:29:46.819483    2625 services.go:158] Get log file "containerd-installation.log" with journalctl command [-u containerd-installation].
I0921 11:30:23.742]     I0921 11:29:46.835844    2625 services.go:158] Get log file "crio.log" with journalctl command [-u crio].
I0921 11:30:23.742]     I0921 11:29:57.643474    2625 services.go:158] Get log file "kern.log" with journalctl command [-k].
I0921 11:30:23.742]     I0921 11:29:57.761394    2625 services.go:158] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I0921 11:30:23.742]     E0921 11:29:57.794963    2625 services.go:161] failed to get "cloud-init.log" from journald: Failed to add filter for units: No data available
I0921 11:30:23.742]     , exit status 1
I0921 11:30:23.743]     I0921 11:29:57.794988    2625 e2e_node_suite_test.go:244] Tests Finished
I0921 11:30:23.743]   << End Captured GinkgoWriter Output
I0921 11:30:23.743] ------------------------------
I0921 11:30:23.743] [ReportAfterSuite] Kubernetes e2e JUnit report
I0921 11:30:23.743] test/e2e/framework/test_context.go:522
... skipping 13 lines ...
I0921 11:30:23.745] 
I0921 11:30:23.745] Summarizing 1 Failure:
I0921 11:30:23.745]   [INTERRUPTED] [sig-node] SystemNodeCriticalPod [Slow] [Serial] [Disruptive] [NodeFeature:SystemNodeCriticalPod] when create a system-node-critical pod  [It] should not be evicted upon DiskPressure
I0921 11:30:23.746]   test/e2e_node/system_node_critical_test.go:84
I0921 11:30:23.746] 
I0921 11:30:23.746] Ran 35 of 376 Specs in 3671.212 seconds
I0921 11:30:23.746] FAIL! - Interrupted by Timeout -- 34 Passed | 1 Failed | 0 Pending | 341 Skipped
I0921 11:30:23.746] --- FAIL: TestE2eNode (3671.24s)
I0921 11:30:23.746] FAIL
I0921 11:30:23.746] 
I0921 11:30:23.747] Ginkgo ran 1 suite in 1h1m11.363761529s
I0921 11:30:23.747] 
I0921 11:30:23.747] Test Suite Failed
I0921 11:30:23.747] 
I0921 11:30:23.747] Failure Finished Test Suite on Host n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d
I0921 11:30:23.748] command [ssh -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@34.127.113.214 -- sudo sh -c 'cd /tmp/node-e2e-20220921T102832 && timeout -k 30s 25200.000000s ./ginkgo --nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeSpecialFeature\]|\[NodeAlphaFeature:.+\]|\[NodeAlphaFeature\]|\[NodeFeature:Eviction\]" ./e2e_node.test -- --system-spec-name= --system-spec-file= --extra-envs= --runtime-config= --v 4 --node-name=n1-standard-2-fedora-coreos-36-20220906-3-0-gcp-x86-64-5af5130d --report-dir=/tmp/node-e2e-20220921T102832/results --report-prefix=fedora --image-description="fedora-coreos-36-20220906-3-0-gcp-x86-64" --feature-gates=LocalStorageCapacityIsolation=true --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service" --extra-log="{\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}"'] failed with error: exit status 1
I0921 11:30:23.748] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0921 11:30:23.749] <                              FINISH TEST                               <
I0921 11:30:23.749] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0921 11:30:23.749] 
I0921 11:30:23.749] Failure: 1 errors encountered.
I0921 11:30:23.812] Checking for custom logdump instances, if any
... skipping 11 lines ...
W0921 11:30:24.007] exit status 1
W0921 11:30:24.008] 2022/09/21 11:30:23 process.go:155: Step 'go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup -vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-infra-e2e-boskos-079 --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeSpecialFeature\]|\[NodeAlphaFeature:.+\]|\[NodeAlphaFeature\]|\[NodeFeature:Eviction\]" --test_args=--feature-gates=LocalStorageCapacityIsolation=true --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service" --extra-log="{\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}" --test-timeout=7h0m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/crio/latest/image-config-cgrpv1-serial.yaml' finished in 1h13m38.277441123s
W0921 11:30:24.008] 2022/09/21 11:30:23 e2e.go:574: Dumping logs locally to: /workspace/_artifacts
W0921 11:30:24.009] 2022/09/21 11:30:23 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts
W0921 11:30:24.009] Trying to find master named 'bootstrap-e2e-master'
W0921 11:30:24.009] Looking for address 'bootstrap-e2e-master-ip'
W0921 11:30:25.072] ERROR: (gcloud.compute.addresses.describe) Could not fetch resource:
W0921 11:30:25.072]  - The resource 'projects/k8s-infra-e2e-boskos-079/regions/us-west1/addresses/bootstrap-e2e-master-ip' was not found
W0921 11:30:25.072] 
W0921 11:30:25.288] Could not detect Kubernetes master node.  Make sure you've launched a cluster with 'kube-up.sh'
I0921 11:30:25.388] Master not detected. Is the cluster up?
I0921 11:30:25.388] Dumping logs from nodes locally to '/workspace/_artifacts'
I0921 11:30:25.389] Detecting nodes in the cluster
... skipping 4 lines ...
W0921 11:30:31.869] NODE_NAMES=
W0921 11:30:31.871] 2022/09/21 11:30:31 process.go:155: Step './cluster/log-dump/log-dump.sh /workspace/_artifacts' finished in 8.061987292s
W0921 11:30:31.872] 2022/09/21 11:30:31 node.go:53: Noop - Node Down()
W0921 11:30:31.872] 2022/09/21 11:30:31 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0921 11:30:31.872] 2022/09/21 11:30:31 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
W0921 11:30:32.297] 2022/09/21 11:30:32 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 425.624461ms
W0921 11:30:32.324] 2022/09/21 11:30:32 main.go:331: Something went wrong: encountered 1 errors: [error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup -vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-infra-e2e-boskos-079 --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeSpecialFeature\]|\[NodeAlphaFeature:.+\]|\[NodeAlphaFeature\]|\[NodeFeature:Eviction\]" --test_args=--feature-gates=LocalStorageCapacityIsolation=true --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service" --extra-log="{\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}" --test-timeout=7h0m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/crio/latest/image-config-cgrpv1-serial.yaml: exit status 1]
W0921 11:30:32.325] Traceback (most recent call last):
W0921 11:30:32.325]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 723, in <module>
W0921 11:30:32.326]     main(parse_args())
W0921 11:30:32.326]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 569, in main
W0921 11:30:32.326]     mode.start(runner_args)
W0921 11:30:32.326]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 228, in start
W0921 11:30:32.326]     check_env(env, self.command, *args)
W0921 11:30:32.327]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0921 11:30:32.327]     subprocess.check_call(cmd, env=env)
W0921 11:30:32.327]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0921 11:30:32.327]     raise CalledProcessError(retcode, cmd)
W0921 11:30:32.328] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--deployment=node', '--provider=gce', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--gcp-zone=us-west1-b', '--node-test-args=--feature-gates=LocalStorageCapacityIsolation=true --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service" --extra-log="{\\"name\\": \\"crio.log\\", \\"journalctl\\": [\\"-u\\", \\"crio\\"]}"', '--node-tests=true', '--test_args=--nodes=1 --focus="\\[Serial\\]" --skip="\\[Flaky\\]|\\[Benchmark\\]|\\[NodeSpecialFeature:.+\\]|\\[NodeSpecialFeature\\]|\\[NodeAlphaFeature:.+\\]|\\[NodeAlphaFeature\\]|\\[NodeFeature:Eviction\\]"', '--timeout=420m', '--node-args=--image-config-file=/workspace/test-infra/jobs/e2e_node/crio/latest/image-config-cgrpv1-serial.yaml')' returned non-zero exit status 1
E0921 11:30:32.333] Command failed
I0921 11:30:32.334] process 535 exited with code 1 after 73.8m
E0921 11:30:32.334] FAIL: pull-kubernetes-node-kubelet-serial-crio-cgroupv1
I0921 11:30:32.334] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0921 11:30:33.098] Activated service account credentials for: [prow-build@k8s-infra-prow-build.iam.gserviceaccount.com]
I0921 11:30:33.241] process 56344 exited with code 0 after 0.0m
I0921 11:30:33.241] Call:  gcloud config get-value account
I0921 11:30:33.920] process 56358 exited with code 0 after 0.0m
I0921 11:30:33.921] Will upload results to gs://kubernetes-jenkins/pr-logs using prow-build@k8s-infra-prow-build.iam.gserviceaccount.com
I0921 11:30:33.921] Upload result and artifacts...
I0921 11:30:33.921] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/112625/pull-kubernetes-node-kubelet-serial-crio-cgroupv1/1572529867969269760
I0921 11:30:33.922] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/112625/pull-kubernetes-node-kubelet-serial-crio-cgroupv1/1572529867969269760/artifacts
W0921 11:30:35.139] CommandException: One or more URLs matched no objects.
E0921 11:30:35.381] Command failed
I0921 11:30:35.381] process 56372 exited with code 1 after 0.0m
W0921 11:30:35.381] Remote dir gs://kubernetes-jenkins/pr-logs/pull/112625/pull-kubernetes-node-kubelet-serial-crio-cgroupv1/1572529867969269760/artifacts not exist yet
I0921 11:30:35.382] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/112625/pull-kubernetes-node-kubelet-serial-crio-cgroupv1/1572529867969269760/artifacts
I0921 11:30:42.418] process 56512 exited with code 0 after 0.1m
I0921 11:30:42.419] Call:  git rev-parse HEAD
I0921 11:30:42.422] process 57084 exited with code 0 after 0.0m
... skipping 20 lines ...