PR | BenTheElder: entrypoint cleanup + non-systemd-host fix
Result | FAILURE
Tests | 15 failed / 695 succeeded
Started |
Elapsed | 39m23s
Revision | 4175f8236e841b2da0d2b746dd47d9ea4fb962c9
Refs | 2767
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sCronJob\sshould\sbe\sable\sto\sschedule\safter\smore\sthan\s100\smissed\sschedule$'
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:189
May 19 19:16:19.386: Failed to wait for active jobs in CronJob concurrent in namespace cronjob-5243
Unexpected error:
    <*errors.StatusError | 0xc002e77c20>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "etcdserver: request timed out",
            Reason: "",
            Details: nil,
            Code: 500,
        },
    }
    etcdserver: request timed out
occurred
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:202
from junit_23.xml
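The failure above is a polling helper giving up because a single API request came back as an HTTP 500 wrapping "etcdserver: request timed out". The sketch below shows roughly how such a wait surfaces that error with client-go; it is a minimal illustration only, and the helper name, poll interval, and timeout are assumptions, not the actual code at test/e2e/apps/cronjob.go:202 (which may handle transient errors differently).

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForActiveJobs is a hypothetical stand-in for the e2e helper: it polls
// the CronJob until Status.Active reports at least `active` running Jobs.
// Returning the Get error from the condition func ends the poll immediately,
// which is one way a single "etcdserver: request timed out" response (a 500
// StatusError) turns into the failure shown above.
func waitForActiveJobs(ctx context.Context, c kubernetes.Interface, ns, name string, active int) error {
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		cj, err := c.BatchV1().CronJobs(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err // a transient apiserver/etcd error aborts the wait here
		}
		return len(cj.Status.Active) >= active, nil
	})
}

func main() {
	// Kubeconfig path taken from the log below; adjust for a local cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/kind-test-config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForActiveJobs(context.TODO(), client, "cronjob-5243", "concurrent", 1); err != nil {
		fmt.Printf("Failed to wait for active jobs in CronJob concurrent: %v\n", err)
	}
}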
[BeforeEach] [sig-apps] CronJob /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 19 19:16:04.189: INFO: >>> kubeConfig: /root/.kube/kind-test-config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to schedule after more than 100 missed schedule /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:189 STEP: Creating a cronjob STEP: Ensuring one job is running May 19 19:16:19.386: FAIL: Failed to wait for active jobs in CronJob concurrent in namespace cronjob-5243 Unexpected error: <*errors.StatusError | 0xc002e77c20>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred Full Stack Trace k8s.io/kubernetes/test/e2e/apps.glob..func1.5() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:202 +0x4b1 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000327980) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000327980) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc000327980, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 [AfterEach] [sig-apps] CronJob /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "cronjob-5243". STEP: Found 0 events.
May 19 19:16:24.218: INFO: POD NODE PHASE GRACE CONDITIONS May 19 19:16:24.218: INFO: May 19 19:16:24.317: INFO: Logging node info for node kind-control-plane May 19 19:16:24.415: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7e5be5e1-f40a-4fa4-9c1e-ae1356a8d5d2 42765 0 2022-05-19 18:59:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-19 18:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-19 18:59:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-19 18:59:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6da7820732414c09b805ad6b354ea130,SystemUUID:cd27b40e-6f1b-48bc-8ba8-36264df7de17,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:16:24.416: INFO: Logging kubelet events for node kind-control-plane May 19 19:16:24.521: INFO: Logging pods the kubelet thinks is on node kind-control-plane May 19 19:16:24.624: INFO: kube-scheduler-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.624: INFO: Container kube-scheduler ready: true, restart count 0 May 19 19:16:24.624: INFO: coredns-78fcd69978-79cfm started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.624: INFO: Container coredns ready: true, restart count 0 May 19 19:16:24.624: INFO: kube-proxy-c8wmp started at 2022-05-19 18:59:52 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.624: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:24.624: INFO: kindnet-sp68s started at 2022-05-19 18:59:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.624: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:24.624: INFO: coredns-78fcd69978-szhjl started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.624: INFO: Container coredns ready: true, restart count 0 May 19 19:16:24.624: INFO: 
local-path-provisioner-6c9449b9dd-rq246 started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.624: INFO: Container local-path-provisioner ready: true, restart count 0 May 19 19:16:24.624: INFO: etcd-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.624: INFO: Container etcd ready: true, restart count 0 May 19 19:16:24.624: INFO: kube-apiserver-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.624: INFO: Container kube-apiserver ready: true, restart count 0 May 19 19:16:24.624: INFO: kube-controller-manager-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.624: INFO: Container kube-controller-manager ready: true, restart count 0 May 19 19:16:25.002: INFO: Latency metrics for node kind-control-plane May 19 19:16:25.002: INFO: Logging node info for node kind-worker May 19 19:16:25.015: INFO: Node Info: &Node{ObjectMeta:{kind-worker 5aace22e-9461-4dd4-8842-d4c95088e6c2 46103 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-7246":"csi-mock-csi-mock-volumes-7246"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-19 19:15:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: 
{{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:23febdd7255b49db9d80d997950dd2f2,SystemUUID:09159bf9-dc54-4c7a-91f7-a2bdb5d0f9d7,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a 
k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d 
k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:16:25.016: INFO: Logging kubelet events for node kind-worker May 19 19:16:25.087: INFO: Logging pods the kubelet thinks is on node kind-worker May 19 19:16:25.173: INFO: hostexec-kind-worker-9n75g started at 2022-05-19 19:16:01 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.173: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:25.173: INFO: up-down-1-wbz22 started at 2022-05-19 19:14:24 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.173: INFO: Container up-down-1 ready: true, restart count 0 May 19 19:16:25.173: INFO: up-down-2-prvvv started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.173: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:25.173: INFO: netserver-0 started at 2022-05-19 19:14:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.173: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.173: INFO: pod-subpath-test-projected-p9hs started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.173: INFO: Container test-container-subpath-projected-p9hs ready: true, restart count 0 May 19 19:16:25.173: INFO: ss2-0 started at 2022-05-19 19:15:40 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.173: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.173: INFO: test-new-deployment-847dcfb7fb-c4njf started at 2022-05-19 19:15:58 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.173: INFO: Container httpd ready: false, restart count 0 May 19 19:16:25.173: INFO: startup-script started at 2022-05-19 19:16:00 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.173: INFO: Container startup-script ready: true, restart count 0 May 19 19:16:25.173: INFO: ss2-2 started at <nil> (0+0 container statuses recorded) May 19 19:16:25.173: INFO: pod-secrets-7eedc752-ee2a-4887-bfa0-6b08ae1b03ad started at 2022-05-19 19:15:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.173: INFO: Container secret-volume-test ready: false, restart count 0 May 19 19:16:25.173: INFO: csi-mockplugin-resizer-0 started at 2022-05-19 19:14:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.173: INFO: Container csi-resizer ready: true, restart count 0 May 19 19:16:25.173: INFO: kindnet-4gdb4 started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.173: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:25.173: INFO: 
csi-mockplugin-attacher-0 started at 2022-05-19 19:14:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.173: INFO: Container csi-attacher ready: true, restart count 0 May 19 19:16:25.173: INFO: test-container-pod started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.173: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.173: INFO: host-test-container-pod started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.173: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:25.173: INFO: kube-proxy-cv6pt started at 2022-05-19 18:59:55 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.173: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:25.173: INFO: pod-secrets-a23f9038-0c7a-436f-a040-8f745ef7d572 started at 2022-05-19 19:15:35 +0000 UTC (0+3 container statuses recorded) May 19 19:16:25.173: INFO: Container creates-volume-test ready: true, restart count 0 May 19 19:16:25.173: INFO: Container dels-volume-test ready: true, restart count 0 May 19 19:16:25.173: INFO: Container upds-volume-test ready: true, restart count 0 May 19 19:16:25.173: INFO: test-container-pod started at 2022-05-19 19:15:43 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.173: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.173: INFO: up-down-2-gkj59 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.173: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:25.173: INFO: hostexec-kind-worker-f96wj started at 2022-05-19 19:15:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.173: INFO: Container agnhost-container ready: false, restart count 0 May 19 19:16:25.173: INFO: hostexec-kind-worker-dcvzn started at <nil> (0+0 container statuses recorded) May 19 19:16:25.173: INFO: netserver-0 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.173: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.173: INFO: configmap-client started at <nil> (0+0 container statuses recorded) May 19 19:16:25.173: INFO: netserver-0 started at 2022-05-19 19:14:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.173: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.173: INFO: oidc-discovery-validator started at 2022-05-19 19:15:50 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.174: INFO: Container oidc-discovery-validator ready: false, restart count 0 May 19 19:16:25.174: INFO: csi-mockplugin-0 started at 2022-05-19 19:14:14 +0000 UTC (0+3 container statuses recorded) May 19 19:16:25.174: INFO: Container csi-provisioner ready: true, restart count 0 May 19 19:16:25.174: INFO: Container driver-registrar ready: true, restart count 0 May 19 19:16:25.174: INFO: Container mock ready: true, restart count 0 May 19 19:16:26.404: INFO: Latency metrics for node kind-worker May 19 19:16:26.404: INFO: Logging node info for node kind-worker2 May 19 19:16:26.435: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 7a16523d-7da3-4c78-89f3-8eb0caae50f1 46424 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-1152":"kind-worker2"} 
kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2022-05-19 19:15:42 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-05-19 19:16:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9e448859533f489fa0673b5d908c4c8a,SystemUUID:73bc70df-3537-48c2-a00e-91739ab5b72a,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 
k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1152^11ea9557-d7a8-11ec-b978-a6428bea170d,DevicePath:,},},Config:nil,},} May 
19 19:16:26.435: INFO: Logging kubelet events for node kind-worker2 May 19 19:16:26.463: INFO: Logging pods the kubelet thinks is on node kind-worker2 May 19 19:16:26.529: INFO: kindnet-jk9nv started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.529: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:26.529: INFO: busybox-91f9a5ba-0aeb-4445-a18c-ab8e7ec56a1a started at 2022-05-19 19:15:23 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.529: INFO: Container busybox ready: true, restart count 0 May 19 19:16:26.529: INFO: pod-22513bfc-35e4-450f-9936-d72b17f75f4c started at 2022-05-19 19:15:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.529: INFO: Container write-pod ready: true, restart count 0 May 19 19:16:26.529: INFO: kube-proxy-wgjrm started at 2022-05-19 18:59:58 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.529: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:26.529: INFO: hostexec-kind-worker2-86jcw started at 2022-05-19 19:15:26 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.529: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:26.529: INFO: ss2-1 started at 2022-05-19 19:15:51 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.529: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.529: INFO: netserver-1 started at 2022-05-19 19:14:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.529: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.529: INFO: pod-secrets-bab7a309-2f16-455c-99ff-6d1894bc83b0 started at <nil> (0+0 container statuses recorded) May 19 19:16:26.530: INFO: up-down-2-5zs72 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.530: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:26.530: INFO: ss-0 started at 2022-05-19 19:15:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.530: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.530: INFO: liveness-188ed752-b6d5-4d2d-8753-f3495434988f started at 2022-05-19 19:14:22 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.530: INFO: Container agnhost-container ready: false, restart count 4 May 19 19:16:26.530: INFO: downwardapi-volume-cdb36836-a0b8-4038-ba01-38306a582019 started at 2022-05-19 19:16:03 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.530: INFO: Container client-container ready: false, restart count 0 May 19 19:16:26.530: INFO: netserver-1 started at 2022-05-19 19:14:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.530: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.530: INFO: csi-hostpathplugin-0 started at 2022-05-19 19:15:33 +0000 UTC (0+7 container statuses recorded) May 19 19:16:26.530: INFO: Container csi-attacher ready: true, restart count 0 May 19 19:16:26.530: INFO: Container csi-provisioner ready: true, restart count 0 May 19 19:16:26.530: INFO: Container csi-resizer ready: true, restart count 0 May 19 19:16:26.530: INFO: Container csi-snapshotter ready: true, restart count 0 May 19 19:16:26.530: INFO: Container hostpath ready: true, restart count 0 May 19 19:16:26.530: INFO: Container liveness-probe ready: true, restart count 0 May 19 19:16:26.530: INFO: Container node-driver-registrar ready: true, restart count 0 May 19 19:16:26.530: INFO: boom-server started at 2022-05-19 19:15:50 +0000 UTC (0+1 container statuses recorded) May 19 
19:16:26.530: INFO: Container boom-server ready: true, restart count 0 May 19 19:16:26.530: INFO: hostexec-kind-worker2-vfx8t started at 2022-05-19 19:15:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.530: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:26.530: INFO: rs-jmlhb started at 2022-05-19 19:15:20 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.530: INFO: Container donothing ready: false, restart count 0 May 19 19:16:26.530: INFO: forbid-27549796-5gbns started at 2022-05-19 19:16:00 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.530: INFO: Container c ready: true, restart count 0 May 19 19:16:26.530: INFO: netserver-1 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.530: INFO: Container webserver ready: false, restart count 0 May 19 19:16:27.366: INFO: Latency metrics for node kind-worker2 May 19 19:16:27.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-5243" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sDeployment\sdeployment\sreaping\sshould\scascade\sto\sits\sreplica\ssets\sand\spods$'
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:95
May 19 19:16:19.412: Unexpected error:
    <*errors.errorString | 0xc003b95760>: {
        s: "error waiting for deployment \"test-new-deployment\" status to match expectation: etcdserver: request timed out",
    }
    error waiting for deployment "test-new-deployment" status to match expectation: etcdserver: request timed out
occurred
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:739
from junit_16.xml
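The Deployment failure is the same etcd timeout wrapped one level deeper: the wait helper used by testDeleteDeployment wraps whatever error ended the poll into the "error waiting for deployment ... status to match expectation" message. A rough sketch of that wrapping, with a hypothetical helper name and the same poll pattern as the sketch above:

package e2ewait

import (
	"context"
	"fmt"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDeploymentStatus is a hypothetical stand-in for the e2e wait: it
// polls the Deployment until `expect` is satisfied and wraps any error that
// ended the poll, producing a message of the form seen above.
func waitForDeploymentStatus(ctx context.Context, c kubernetes.Interface, ns, name string, expect func(*appsv1.Deployment) bool) error {
	err := wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		d, err := c.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err // e.g. a 500 carrying "etcdserver: request timed out"
		}
		return expect(d), nil
	})
	if err != nil {
		return fmt.Errorf("error waiting for deployment %q status to match expectation: %v", name, err)
	}
	return nil
}

// deploymentComplete is an example condition: the latest generation has been
// observed and every desired replica is updated and available.
func deploymentComplete(d *appsv1.Deployment) bool {
	replicas := int32(1)
	if d.Spec.Replicas != nil {
		replicas = *d.Spec.Replicas
	}
	return d.Status.ObservedGeneration >= d.Generation &&
		d.Status.UpdatedReplicas == replicas &&
		d.Status.AvailableReplicas == replicas
}

Invoked as waitForDeploymentStatus(ctx, client, "deployment-1574", "test-new-deployment", deploymentComplete), a single timed-out etcd request is enough to fail the case, even though the events below show the pod was scheduled and its httpd container started.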
[BeforeEach] [sig-apps] Deployment /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 19 19:15:58.295: INFO: >>> kubeConfig: /root/.kube/kind-test-config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] deployment reaping should cascade to its replica sets and pods /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:95 May 19 19:15:58.330: INFO: Creating simple deployment test-new-deployment May 19 19:15:58.346: INFO: deployment "test-new-deployment" doesn't have the required revision set May 19 19:16:00.356: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788584558, loc:(*time.Location)(0xa0b0fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788584558, loc:(*time.Location)(0xa0b0fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788584558, loc:(*time.Location)(0xa0b0fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788584558, loc:(*time.Location)(0xa0b0fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 19:16:02.360: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788584558, loc:(*time.Location)(0xa0b0fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788584558, loc:(*time.Location)(0xa0b0fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788584558, loc:(*time.Location)(0xa0b0fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788584558, loc:(*time.Location)(0xa0b0fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 19:16:04.360: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788584558, loc:(*time.Location)(0xa0b0fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788584558, loc:(*time.Location)(0xa0b0fa0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63788584558, 
loc:(*time.Location)(0xa0b0fa0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63788584558, loc:(*time.Location)(0xa0b0fa0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} May 19 19:16:19.412: FAIL: Unexpected error: <*errors.errorString | 0xc003b95760>: { s: "error waiting for deployment \"test-new-deployment\" status to match expectation: etcdserver: request timed out", } error waiting for deployment "test-new-deployment" status to match expectation: etcdserver: request timed out occurred Full Stack Trace k8s.io/kubernetes/test/e2e/apps.testDeleteDeployment(0xc000e34f20) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:739 +0x665 k8s.io/kubernetes/test/e2e/apps.glob..func4.3() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:96 +0x2a k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000a1c180) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000a1c180) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc000a1c180, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 [AfterEach] [sig-apps] Deployment /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 May 19 19:16:24.027: INFO: Deployment "test-new-deployment": &Deployment{ObjectMeta:{test-new-deployment deployment-1574 c9c8dc2a-3226-436f-bb32-be989d7d9782 46067 1 2022-05-19 19:15:58 +0000 UTC <nil> <nil> map[name:httpd] map[deployment.kubernetes.io/revision:1 kubectl.kubernetes.io/last-applied-configuration:should-not-copy-to-replica-set test:should-copy-to-replica-set] [] [] [{e2e.test Update apps/v1 2022-05-19 19:15:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:test":{}},"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-05-19 19:15:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] 
[] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003bf20c8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-05-19 19:15:58 +0000 UTC,LastTransitionTime:2022-05-19 19:15:58 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-new-deployment-847dcfb7fb" is progressing.,LastUpdateTime:2022-05-19 19:15:58 +0000 UTC,LastTransitionTime:2022-05-19 19:15:58 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 19 19:16:24.212: INFO: New ReplicaSet "test-new-deployment-847dcfb7fb" of Deployment "test-new-deployment": &ReplicaSet{ObjectMeta:{test-new-deployment-847dcfb7fb deployment-1574 490c1b9e-a67a-4199-ad89-67791d512309 46066 1 2022-05-19 19:15:58 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1 test:should-copy-to-replica-set] [{apps/v1 Deployment test-new-deployment c9c8dc2a-3226-436f-bb32-be989d7d9782 0xc003bf24d7 0xc003bf24d8}] [] [{kube-controller-manager Update apps/v1 2022-05-19 19:15:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{},"f:test":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c9c8dc2a-3226-436f-bb32-be989d7d9782\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-05-19 19:15:58 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003bf2568 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 19 19:16:24.312: INFO: Pod "test-new-deployment-847dcfb7fb-c4njf" is not available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-c4njf test-new-deployment-847dcfb7fb- deployment-1574 031541d8-9dd5-4657-b212-4d5a01a6f065 46410 0 2022-05-19 19:15:58 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 490c1b9e-a67a-4199-ad89-67791d512309 0xc005116ea7 0xc005116ea8}] [] [{kube-controller-manager Update v1 2022-05-19 19:15:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"490c1b9e-a67a-4199-ad89-67791d512309\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-05-19 19:16:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nslwd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nslwd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,St
atus:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-19 19:15:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-19 19:15:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-19 19:15:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-19 19:15:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2022-05-19 19:15:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 �[1mSTEP�[0m: Collecting events from namespace "deployment-1574". �[1mSTEP�[0m: Found 6 events. May 19 19:16:24.417: INFO: At 2022-05-19 19:15:58 +0000 UTC - event for test-new-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-new-deployment-847dcfb7fb to 1 May 19 19:16:24.417: INFO: At 2022-05-19 19:15:58 +0000 UTC - event for test-new-deployment-847dcfb7fb: {replicaset-controller } SuccessfulCreate: Created pod: test-new-deployment-847dcfb7fb-c4njf May 19 19:16:24.417: INFO: At 2022-05-19 19:15:58 +0000 UTC - event for test-new-deployment-847dcfb7fb-c4njf: {default-scheduler } Scheduled: Successfully assigned deployment-1574/test-new-deployment-847dcfb7fb-c4njf to kind-worker May 19 19:16:24.417: INFO: At 2022-05-19 19:16:00 +0000 UTC - event for test-new-deployment-847dcfb7fb-c4njf: {kubelet kind-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine May 19 19:16:24.417: INFO: At 2022-05-19 19:16:00 +0000 UTC - event for test-new-deployment-847dcfb7fb-c4njf: {kubelet kind-worker} Created: Created container httpd May 19 19:16:24.417: INFO: At 2022-05-19 19:16:00 +0000 UTC - event for test-new-deployment-847dcfb7fb-c4njf: {kubelet kind-worker} Started: Started container httpd May 19 19:16:24.509: INFO: POD NODE PHASE GRACE CONDITIONS May 19 19:16:24.509: INFO: test-new-deployment-847dcfb7fb-c4njf kind-worker Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:58 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:58 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:58 +0000 UTC }] May 19 19:16:24.509: INFO: May 19 19:16:24.591: INFO: Logging node info for node kind-control-plane May 19 19:16:24.626: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7e5be5e1-f40a-4fa4-9c1e-ae1356a8d5d2 42765 0 2022-05-19 18:59:20 +0000 
UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-19 18:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-19 18:59:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-19 18:59:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6da7820732414c09b805ad6b354ea130,SystemUUID:cd27b40e-6f1b-48bc-8ba8-36264df7de17,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:16:24.627: INFO: Logging kubelet events for node kind-control-plane May 19 19:16:24.645: INFO: Logging pods the kubelet thinks is on node kind-control-plane May 19 19:16:24.795: INFO: kube-apiserver-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.795: INFO: Container kube-apiserver ready: true, restart count 0 May 19 19:16:24.795: INFO: kube-controller-manager-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.795: INFO: Container kube-controller-manager ready: false, restart count 0 May 19 19:16:24.795: INFO: kindnet-sp68s started at 2022-05-19 18:59:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.795: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:24.795: INFO: coredns-78fcd69978-szhjl started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.795: INFO: Container coredns ready: true, restart count 0 May 19 19:16:24.795: INFO: local-path-provisioner-6c9449b9dd-rq246 started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.795: INFO: Container local-path-provisioner ready: true, restart count 0 May 19 19:16:24.795: INFO: etcd-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.795: INFO: Container etcd ready: true, restart count 0 May 19 19:16:24.795: INFO: coredns-78fcd69978-79cfm started 
at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.796: INFO: Container coredns ready: true, restart count 0 May 19 19:16:24.796: INFO: kube-proxy-c8wmp started at 2022-05-19 18:59:52 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.796: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:24.796: INFO: kube-scheduler-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.796: INFO: Container kube-scheduler ready: false, restart count 0 May 19 19:16:25.163: INFO: Latency metrics for node kind-control-plane May 19 19:16:25.163: INFO: Logging node info for node kind-worker May 19 19:16:25.219: INFO: Node Info: &Node{ObjectMeta:{kind-worker 5aace22e-9461-4dd4-8842-d4c95088e6c2 46103 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-7246":"csi-mock-csi-mock-volumes-7246"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-19 19:15:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 
UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:23febdd7255b49db9d80d997950dd2f2,SystemUUID:09159bf9-dc54-4c7a-91f7-a2bdb5d0f9d7,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac 
k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:16:25.219: INFO: Logging kubelet events for node kind-worker May 19 19:16:25.264: INFO: Logging pods the kubelet thinks is on node kind-worker May 19 19:16:25.428: INFO: kindnet-4gdb4 started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.428: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:25.428: INFO: csi-mockplugin-attacher-0 started at 2022-05-19 19:14:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.428: INFO: Container csi-attacher ready: true, restart count 0 May 19 19:16:25.429: INFO: test-container-pod started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.429: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.429: INFO: host-test-container-pod started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.429: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:25.429: INFO: kube-proxy-cv6pt started at 2022-05-19 18:59:55 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.429: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:25.429: INFO: pod-secrets-a23f9038-0c7a-436f-a040-8f745ef7d572 started at 2022-05-19 19:15:35 +0000 UTC (0+3 container statuses recorded) May 19 19:16:25.429: INFO: Container creates-volume-test ready: true, restart count 0 May 19 19:16:25.429: INFO: Container dels-volume-test ready: true, restart count 0 May 19 19:16:25.429: INFO: Container upds-volume-test ready: true, restart count 0 May 19 19:16:25.429: INFO: test-container-pod started at 2022-05-19 19:15:43 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.429: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.429: INFO: up-down-2-gkj59 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.429: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:25.429: INFO: hostexec-kind-worker-f96wj started at 2022-05-19 19:15:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.429: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:25.429: INFO: hostexec-kind-worker-dcvzn started at <nil> (0+0 container statuses recorded) May 19 19:16:25.429: INFO: netserver-0 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.429: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.429: INFO: configmap-client started at <nil> (0+0 container statuses recorded) May 19 19:16:25.429: INFO: netserver-0 started at 2022-05-19 19:14:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.429: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.429: INFO: oidc-discovery-validator started at 2022-05-19 19:15:50 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.429: INFO: Container 
oidc-discovery-validator ready: false, restart count 0 May 19 19:16:25.429: INFO: csi-mockplugin-0 started at 2022-05-19 19:14:14 +0000 UTC (0+3 container statuses recorded) May 19 19:16:25.429: INFO: Container csi-provisioner ready: true, restart count 0 May 19 19:16:25.429: INFO: Container driver-registrar ready: true, restart count 0 May 19 19:16:25.429: INFO: Container mock ready: true, restart count 0 May 19 19:16:25.429: INFO: hostexec-kind-worker-9n75g started at 2022-05-19 19:16:01 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.429: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:25.429: INFO: up-down-1-wbz22 started at 2022-05-19 19:14:24 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.429: INFO: Container up-down-1 ready: true, restart count 0 May 19 19:16:25.429: INFO: up-down-2-prvvv started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.429: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:25.429: INFO: netserver-0 started at 2022-05-19 19:14:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.429: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.429: INFO: pod-subpath-test-projected-p9hs started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.429: INFO: Container test-container-subpath-projected-p9hs ready: true, restart count 0 May 19 19:16:25.429: INFO: ss2-0 started at 2022-05-19 19:15:40 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.429: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.429: INFO: test-new-deployment-847dcfb7fb-c4njf started at 2022-05-19 19:15:58 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.429: INFO: Container httpd ready: false, restart count 0 May 19 19:16:25.429: INFO: startup-script started at 2022-05-19 19:16:00 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.429: INFO: Container startup-script ready: true, restart count 0 May 19 19:16:25.429: INFO: ss2-2 started at <nil> (0+0 container statuses recorded) May 19 19:16:25.429: INFO: pod-secrets-7eedc752-ee2a-4887-bfa0-6b08ae1b03ad started at 2022-05-19 19:15:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.429: INFO: Container secret-volume-test ready: false, restart count 0 May 19 19:16:25.429: INFO: csi-mockplugin-resizer-0 started at 2022-05-19 19:14:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.429: INFO: Container csi-resizer ready: true, restart count 0 May 19 19:16:26.350: INFO: Latency metrics for node kind-worker May 19 19:16:26.350: INFO: Logging node info for node kind-worker2 May 19 19:16:26.363: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 7a16523d-7da3-4c78-89f3-8eb0caae50f1 46424 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-1152":"kind-worker2"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2022-05-19 19:15:42 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-05-19 19:16:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9e448859533f489fa0673b5d908c4c8a,SystemUUID:73bc70df-3537-48c2-a00e-91739ab5b72a,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 
21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd 
k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1152^11ea9557-d7a8-11ec-b978-a6428bea170d,DevicePath:,},},Config:nil,},} May 19 19:16:26.363: INFO: Logging kubelet events for node kind-worker2 May 19 19:16:26.398: INFO: Logging pods the kubelet thinks is on node kind-worker2 May 19 19:16:26.440: INFO: csi-hostpathplugin-0 started at 2022-05-19 19:15:33 +0000 UTC (0+7 container statuses recorded) May 19 19:16:26.440: INFO: Container csi-attacher ready: true, restart count 0 May 19 19:16:26.440: INFO: Container csi-provisioner ready: true, restart count 0 May 19 19:16:26.440: INFO: Container csi-resizer ready: true, restart count 0 May 19 19:16:26.440: INFO: Container csi-snapshotter ready: true, restart 
count 0 May 19 19:16:26.440: INFO: Container hostpath ready: true, restart count 0 May 19 19:16:26.440: INFO: Container liveness-probe ready: true, restart count 0 May 19 19:16:26.440: INFO: Container node-driver-registrar ready: true, restart count 0 May 19 19:16:26.440: INFO: liveness-188ed752-b6d5-4d2d-8753-f3495434988f started at 2022-05-19 19:14:22 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.440: INFO: Container agnhost-container ready: false, restart count 4 May 19 19:16:26.440: INFO: downwardapi-volume-cdb36836-a0b8-4038-ba01-38306a582019 started at 2022-05-19 19:16:03 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.440: INFO: Container client-container ready: false, restart count 0 May 19 19:16:26.440: INFO: netserver-1 started at 2022-05-19 19:14:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.440: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.440: INFO: rs-jmlhb started at 2022-05-19 19:15:20 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.440: INFO: Container donothing ready: false, restart count 0 May 19 19:16:26.440: INFO: boom-server started at 2022-05-19 19:15:50 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.440: INFO: Container boom-server ready: true, restart count 0 May 19 19:16:26.440: INFO: hostexec-kind-worker2-vfx8t started at 2022-05-19 19:15:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.440: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:26.440: INFO: forbid-27549796-5gbns started at 2022-05-19 19:16:00 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.440: INFO: Container c ready: true, restart count 0 May 19 19:16:26.440: INFO: netserver-1 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.440: INFO: Container webserver ready: false, restart count 0 May 19 19:16:26.440: INFO: busybox-91f9a5ba-0aeb-4445-a18c-ab8e7ec56a1a started at 2022-05-19 19:15:23 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.440: INFO: Container busybox ready: true, restart count 0 May 19 19:16:26.440: INFO: kindnet-jk9nv started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.440: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:26.440: INFO: pod-22513bfc-35e4-450f-9936-d72b17f75f4c started at 2022-05-19 19:15:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.440: INFO: Container write-pod ready: true, restart count 0 May 19 19:16:26.440: INFO: netserver-1 started at 2022-05-19 19:14:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.440: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.440: INFO: kube-proxy-wgjrm started at 2022-05-19 18:59:58 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.440: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:26.440: INFO: hostexec-kind-worker2-86jcw started at 2022-05-19 19:15:26 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.440: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:26.440: INFO: ss2-1 started at 2022-05-19 19:15:51 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.440: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.440: INFO: pod-secrets-bab7a309-2f16-455c-99ff-6d1894bc83b0 started at <nil> (0+0 container statuses recorded) May 19 19:16:26.440: INFO: up-down-2-5zs72 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses 
recorded) May 19 19:16:26.440: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:26.440: INFO: ss-0 started at 2022-05-19 19:15:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.440: INFO: Container webserver ready: true, restart count 0 May 19 19:16:27.200: INFO: Latency metrics for node kind-worker2 May 19 19:16:27.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1574" for this suite.
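For context on the teardown output above: the "Collecting events from namespace" step amounts to listing the Events in the test namespace. The following is a minimal, hypothetical client-go sketch of that query, not the framework's actual helper; the clientset construction, error handling, and output format are assumptions, while the kubeconfig path and namespace are taken from this log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log above; adjust for a local reproduction.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/kind-test-config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Roughly the query behind "Found 6 events." in the deployment-1574 dump above.
	evs, err := cs.CoreV1().Events("deployment-1574").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, e := range evs.Items {
		fmt.Printf("%s %s/%s %s: %s\n",
			e.LastTimestamp.Format("2006-01-02 15:04:05"),
			e.InvolvedObject.Kind, e.InvolvedObject.Name, e.Reason, e.Message)
	}
}

Printed this way, the six events in the dump (ScalingReplicaSet, SuccessfulCreate, Scheduled, Pulled, Created, Started) trace the Deployment → ReplicaSet → Pod chain, even though the pod was still Pending when the dump was taken.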
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\sadopt\smatching\sorphans\sand\srelease\snon\-matching\spods$'
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:167 May 19 19:16:19.412: Unexpected error: <*errors.errorString | 0xc002300a00>: { s: "failed to get pod \"ss-0\": etcdserver: request timed out", } failed to get pod "ss-0": etcdserver: request timed out occurred /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:130 from junit_22.xml
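Both failed specs in this section surface the same error class: the apiserver returns HTTP 500 with "etcdserver: request timed out" and the test helper propagates it as a failure. Purely as an illustration of how a client could poll through such transient server-side errors when fetching a pod, here is a minimal sketch; it is NOT the e2e framework's pods.go helper, and the package name, function name, interval, and timeout are assumptions.

package e2eutil // hypothetical package; not part of the Kubernetes test framework

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// GetPodWithRetry fetches a pod, polling through transient server-side failures
// such as the HTTP 500 "etcdserver: request timed out" seen in this run.
// Interval and timeout are arbitrary illustrative values.
func GetPodWithRetry(cs kubernetes.Interface, ns, name string) (*v1.Pod, error) {
	var pod *v1.Pod
	err := wait.PollImmediate(2*time.Second, 1*time.Minute, func() (bool, error) {
		p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			pod = p
			return true, nil
		}
		if apierrors.IsNotFound(err) {
			return false, err // the pod genuinely does not exist; retrying will not help
		}
		// The failing requests in this log carry Code 500 and an empty Reason,
		// so treat other errors as potentially transient and keep polling.
		fmt.Printf("retrying Get %s/%s after error: %v\n", ns, name, err)
		return false, nil
	})
	return pod, err
}

In the run below, the framework helper surfaced the error directly instead of polling through it, so a brief etcd timeout at 19:16:19 was enough to fail the spec.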
{"msg":"PASSED [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service","total":-1,"completed":26,"skipped":256,"failed":0} [BeforeEach] [sig-apps] StatefulSet /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client May 19 19:15:14.882: INFO: >>> kubeConfig: /root/.kube/kind-test-config �[1mSTEP�[0m: Building a namespace api object, basename statefulset �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 �[1mSTEP�[0m: Creating service test in namespace statefulset-5373 [It] should adopt matching orphans and release non-matching pods /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:167 �[1mSTEP�[0m: Creating statefulset ss in namespace statefulset-5373 May 19 19:15:14.994: INFO: Default storage class: "standard" �[1mSTEP�[0m: Saturating stateful set ss May 19 19:15:15.005: INFO: Waiting for stateful pod at index 0 to enter Running May 19 19:15:15.030: INFO: Found 0 stateful pods, waiting for 1 May 19 19:15:25.047: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Pending - Ready=false May 19 19:15:35.050: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Pending - Ready=false May 19 19:15:45.053: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Pending - Ready=false May 19 19:15:55.061: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Pending - Ready=false May 19 19:16:05.034: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 19 19:16:05.034: INFO: Resuming stateful pod at index 0 May 19 19:16:05.037: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://127.0.0.1:38671 --kubeconfig=/root/.kube/kind-test-config --namespace=statefulset-5373 exec ss-0 -- /bin/sh -x -c dd if=/dev/zero of=/data/statefulset-continue bs=1 count=1 conv=fsync' May 19 19:16:05.232: INFO: stderr: "+ dd 'if=/dev/zero' 'of=/data/statefulset-continue' 'bs=1' 'count=1' 'conv=fsync'\n1+0 records in\n1+0 records out\n" May 19 19:16:05.232: INFO: stdout: "" May 19 19:16:05.232: INFO: Resumed pod ss-0 �[1mSTEP�[0m: Checking that stateful set pods are created with ControllerRef �[1mSTEP�[0m: Orphaning one of the stateful set's pods May 19 19:16:19.412: FAIL: Unexpected error: <*errors.errorString | 0xc002300a00>: { s: "failed to get pod \"ss-0\": etcdserver: request timed out", } failed to get pod "ss-0": etcdserver: request timed out occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*PodClient).Update(0xc003480588, 0xc0028fa9dc, 0x4, 0x72e39e8) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:130 +0xcd k8s.io/kubernetes/test/e2e/apps.glob..func9.2.4() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:194 +0x78a k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0001ae780) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0001ae780) 
_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc0001ae780, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 May 19 19:16:24.027: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://127.0.0.1:38671 --kubeconfig=/root/.kube/kind-test-config --namespace=statefulset-5373 describe po ss-0' May 19 19:16:24.420: INFO: stderr: "" May 19 19:16:24.420: INFO: stdout: "Name: ss-0\nNamespace: statefulset-5373\nPriority: 0\nNode: kind-worker2/172.18.0.2\nStart Time: Thu, 19 May 2022 19:15:29 +0000\nLabels: baz=blah\n controller-revision-hash=ss-696cb77d7d\n foo=bar\n statefulset.kubernetes.io/pod-name=ss-0\nAnnotations: <none>\nStatus: Running\nIP: 10.244.2.15\nIPs:\n IP: 10.244.2.15\nControlled By: StatefulSet/ss\nContainers:\n webserver:\n Container ID: containerd://872e402c59fd323dc9fbb1157f972ed975ac3e8e32d5694b27f3a49220b73a36\n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\n Image ID: k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\n Port: <none>\n Host Port: <none>\n State: Running\n Started: Thu, 19 May 2022 19:15:32 +0000\n Ready: False\n Restart Count: 0\n Readiness: exec [test -f /data/statefulset-continue] delay=0s timeout=1s period=1s #success=1 #failure=1\n Environment: <none>\n Mounts:\n /data/ from datadir (rw)\n /home from home (rw)\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d9s5p (ro)\nConditions:\n Type Status\n Initialized True \n Ready False \n ContainersReady False \n PodScheduled True \nVolumes:\n datadir:\n Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)\n ClaimName: datadir-ss-0\n ReadOnly: false\n home:\n Type: HostPath (bare host directory volume)\n Path: /tmp/home\n HostPathType: \n kube-api-access-d9s5p:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 55s default-scheduler Successfully assigned statefulset-5373/ss-0 to kind-worker2\n Normal Pulled 52s kubelet Container image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\" already present on machine\n Normal Created 52s kubelet Created container webserver\n Normal Started 52s kubelet Started container webserver\n Warning Unhealthy 31s (x22 over 51s) kubelet Readiness probe failed:\n" May 19 19:16:24.421: INFO: Output of kubectl describe ss-0: Name: ss-0 Namespace: statefulset-5373 Priority: 0 Node: kind-worker2/172.18.0.2 Start Time: Thu, 19 May 2022 19:15:29 +0000 Labels: baz=blah controller-revision-hash=ss-696cb77d7d foo=bar statefulset.kubernetes.io/pod-name=ss-0 Annotations: <none> Status: Running IP: 10.244.2.15 IPs: IP: 10.244.2.15 Controlled By: StatefulSet/ss Containers: webserver: Container ID: containerd://872e402c59fd323dc9fbb1157f972ed975ac3e8e32d5694b27f3a49220b73a36 Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Image ID: 
k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 Port: <none> Host Port: <none> State: Running Started: Thu, 19 May 2022 19:15:32 +0000 Ready: False Restart Count: 0 Readiness: exec [test -f /data/statefulset-continue] delay=0s timeout=1s period=1s #success=1 #failure=1 Environment: <none> Mounts: /data/ from datadir (rw) /home from home (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d9s5p (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: datadir: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: datadir-ss-0 ReadOnly: false home: Type: HostPath (bare host directory volume) Path: /tmp/home HostPathType: kube-api-access-d9s5p: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 55s default-scheduler Successfully assigned statefulset-5373/ss-0 to kind-worker2 Normal Pulled 52s kubelet Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine Normal Created 52s kubelet Created container webserver Normal Started 52s kubelet Started container webserver Warning Unhealthy 31s (x22 over 51s) kubelet Readiness probe failed: May 19 19:16:24.421: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://127.0.0.1:38671 --kubeconfig=/root/.kube/kind-test-config --namespace=statefulset-5373 logs ss-0 --tail=100' May 19 19:16:24.656: INFO: stderr: "" May 19 19:16:24.656: INFO: stdout: "[Thu May 19 19:15:32.470654 2022] [mpm_event:notice] [pid 1:tid 140402184104808] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Thu May 19 19:15:32.470752 2022] [core:notice] [pid 1:tid 140402184104808] AH00094: Command line: 'httpd -D FOREGROUND'\n" May 19 19:16:24.656: INFO: Last 100 log lines of ss-0: [Thu May 19 19:15:32.470654 2022] [mpm_event:notice] [pid 1:tid 140402184104808] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations [Thu May 19 19:15:32.470752 2022] [core:notice] [pid 1:tid 140402184104808] AH00094: Command line: 'httpd -D FOREGROUND' May 19 19:16:24.656: INFO: Deleting all statefulset in ns statefulset-5373 May 19 19:16:24.760: INFO: Scaling statefulset ss to 0 May 19 19:17:15.024: INFO: Waiting for statefulset status.replicas updated to 0 May 19 19:17:15.068: INFO: Deleting statefulset ss May 19 19:17:15.186: INFO: Deleting pvc: datadir-ss-0 with volume pvc-8382e58a-2169-490e-ab66-0bf18b4c0e71 May 19 19:17:15.323: INFO: Still waiting for pvs of statefulset to disappear: pvc-8382e58a-2169-490e-ab66-0bf18b4c0e71: {Phase:Bound Message: Reason:} [AfterEach] [sig-apps] StatefulSet /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 �[1mSTEP�[0m: Collecting events from namespace "statefulset-5373". �[1mSTEP�[0m: Found 12 events. 
May 19 19:17:25.331: INFO: At 2022-05-19 19:15:15 +0000 UTC - event for datadir-ss-0: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding May 19 19:17:25.331: INFO: At 2022-05-19 19:15:15 +0000 UTC - event for datadir-ss-0: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "rancher.io/local-path" or manually created by system administrator May 19 19:17:25.331: INFO: At 2022-05-19 19:15:15 +0000 UTC - event for datadir-ss-0: {rancher.io/local-path_local-path-provisioner-6c9449b9dd-rq246_bf3d0679-b5b5-4856-b7d9-6a34c5be50ee } Provisioning: External provisioner is provisioning volume for claim "statefulset-5373/datadir-ss-0" May 19 19:17:25.331: INFO: At 2022-05-19 19:15:15 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Claim datadir-ss-0 Pod ss-0 in StatefulSet ss success May 19 19:17:25.331: INFO: At 2022-05-19 19:15:15 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful May 19 19:17:25.331: INFO: At 2022-05-19 19:15:28 +0000 UTC - event for datadir-ss-0: {rancher.io/local-path_local-path-provisioner-6c9449b9dd-rq246_bf3d0679-b5b5-4856-b7d9-6a34c5be50ee } ProvisioningSucceeded: Successfully provisioned volume pvc-8382e58a-2169-490e-ab66-0bf18b4c0e71 May 19 19:17:25.331: INFO: At 2022-05-19 19:15:29 +0000 UTC - event for ss-0: {default-scheduler } Scheduled: Successfully assigned statefulset-5373/ss-0 to kind-worker2 May 19 19:17:25.331: INFO: At 2022-05-19 19:15:32 +0000 UTC - event for ss-0: {kubelet kind-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine May 19 19:17:25.331: INFO: At 2022-05-19 19:15:32 +0000 UTC - event for ss-0: {kubelet kind-worker2} Created: Created container webserver May 19 19:17:25.331: INFO: At 2022-05-19 19:15:32 +0000 UTC - event for ss-0: {kubelet kind-worker2} Started: Started container webserver May 19 19:17:25.331: INFO: At 2022-05-19 19:15:33 +0000 UTC - event for ss-0: {kubelet kind-worker2} Unhealthy: Readiness probe failed: May 19 19:17:25.331: INFO: At 2022-05-19 19:16:56 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful May 19 19:17:25.333: INFO: POD NODE PHASE GRACE CONDITIONS May 19 19:17:25.333: INFO: May 19 19:17:25.336: INFO: Logging node info for node kind-control-plane May 19 19:17:25.345: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7e5be5e1-f40a-4fa4-9c1e-ae1356a8d5d2 42765 0 2022-05-19 18:59:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-19 18:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-19 18:59:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-19 18:59:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6da7820732414c09b805ad6b354ea130,SystemUUID:cd27b40e-6f1b-48bc-8ba8-36264df7de17,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 
k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:17:25.345: INFO: Logging kubelet events for node kind-control-plane May 19 19:17:25.349: INFO: Logging pods the kubelet thinks is on node kind-control-plane May 19 19:17:25.366: INFO: kube-scheduler-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.366: INFO: Container kube-scheduler ready: true, restart count 1 May 19 19:17:25.366: INFO: coredns-78fcd69978-79cfm started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.366: INFO: Container coredns ready: true, restart count 0 May 19 19:17:25.366: INFO: kube-proxy-c8wmp started at 2022-05-19 18:59:52 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.366: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:17:25.366: INFO: coredns-78fcd69978-szhjl started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.366: INFO: Container coredns ready: true, restart count 0 May 19 19:17:25.366: INFO: local-path-provisioner-6c9449b9dd-rq246 started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.366: INFO: Container local-path-provisioner ready: true, restart count 0 May 19 19:17:25.366: INFO: etcd-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.366: INFO: Container etcd ready: true, restart count 0 May 19 19:17:25.366: INFO: kube-apiserver-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.366: INFO: Container kube-apiserver ready: true, restart count 0 May 19 19:17:25.366: INFO: kube-controller-manager-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.366: INFO: Container kube-controller-manager ready: true, restart count 1 May 19 19:17:25.366: INFO: kindnet-sp68s started at 2022-05-19 18:59:37 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.366: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:17:25.413: INFO: Latency metrics for node kind-control-plane May 19 19:17:25.413: INFO: Logging node info for node kind-worker May 19 19:17:25.430: INFO: Node Info: &Node{ObjectMeta:{kind-worker 
5aace22e-9461-4dd4-8842-d4c95088e6c2 48233 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1303":"csi-mock-csi-mock-volumes-1303"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-19 19:17:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:17:16 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:17:16 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:17:16 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:17:16 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:23febdd7255b49db9d80d997950dd2f2,SystemUUID:09159bf9-dc54-4c7a-91f7-a2bdb5d0f9d7,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:23799982,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:17:25.430: INFO: Logging kubelet events for node kind-worker May 19 19:17:25.443: INFO: Logging pods the kubelet thinks is on node kind-worker May 19 
19:17:25.469: INFO: startup-script started at 2022-05-19 19:16:00 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.469: INFO: Container startup-script ready: true, restart count 0 May 19 19:17:25.469: INFO: pod-secrets-bfa7a073-84a9-4ef9-9f6a-e37d2d8ff45f started at <nil> (0+0 container statuses recorded) May 19 19:17:25.469: INFO: deployment-585449566-scnfx started at <nil> (0+0 container statuses recorded) May 19 19:17:25.469: INFO: deployment-55649fd747-jqjxb started at <nil> (0+0 container statuses recorded) May 19 19:17:25.469: INFO: csi-mockplugin-0 started at 2022-05-19 19:17:13 +0000 UTC (0+4 container statuses recorded) May 19 19:17:25.469: INFO: Container busybox ready: false, restart count 0 May 19 19:17:25.469: INFO: Container csi-provisioner ready: false, restart count 0 May 19 19:17:25.469: INFO: Container driver-registrar ready: false, restart count 0 May 19 19:17:25.469: INFO: Container mock ready: false, restart count 0 May 19 19:17:25.469: INFO: csi-mockplugin-0 started at 2022-05-19 19:16:59 +0000 UTC (0+3 container statuses recorded) May 19 19:17:25.469: INFO: Container csi-provisioner ready: false, restart count 0 May 19 19:17:25.469: INFO: Container driver-registrar ready: false, restart count 0 May 19 19:17:25.469: INFO: Container mock ready: false, restart count 0 May 19 19:17:25.469: INFO: test-container-pod started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.469: INFO: Container webserver ready: true, restart count 0 May 19 19:17:25.469: INFO: hostexec-kind-worker-kdbgq started at 2022-05-19 19:16:57 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.469: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:17:25.469: INFO: busybox-host-aliasesf0ade533-7aeb-4fbd-a076-c7babc5cfc89 started at 2022-05-19 19:16:57 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.469: INFO: Container busybox-host-aliasesf0ade533-7aeb-4fbd-a076-c7babc5cfc89 ready: true, restart count 0 May 19 19:17:25.469: INFO: simpletest.rc-rz7xq started at 2022-05-19 19:16:57 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.469: INFO: Container nginx ready: true, restart count 0 May 19 19:17:25.469: INFO: kindnet-4gdb4 started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.469: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:17:25.469: INFO: netserver-0 started at 2022-05-19 19:16:57 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.469: INFO: Container webserver ready: false, restart count 0 May 19 19:17:25.469: INFO: up-down-2-gkj59 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.469: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:17:25.470: INFO: kube-proxy-cv6pt started at 2022-05-19 18:59:55 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.470: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:17:25.470: INFO: netserver-0 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.470: INFO: Container webserver ready: true, restart count 0 May 19 19:17:25.470: INFO: e2e-net-exec started at 2022-05-19 19:16:58 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.470: INFO: Container e2e-net-exec ready: true, restart count 0 May 19 19:17:25.470: INFO: pod-subpath-test-preprovisionedpv-scw5 started at 2022-05-19 19:17:12 +0000 UTC (0+2 container statuses recorded) May 19 19:17:25.470: INFO: Container 
test-container-subpath-preprovisionedpv-scw5 ready: false, restart count 0 May 19 19:17:25.470: INFO: Container test-container-volume-preprovisionedpv-scw5 ready: false, restart count 0 May 19 19:17:25.470: INFO: deployment-585449566-qq9tl started at 2022-05-19 19:17:12 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.470: INFO: Container nginx ready: false, restart count 0 May 19 19:17:25.470: INFO: hostexec-kind-worker-vdph4 started at <nil> (0+0 container statuses recorded) May 19 19:17:25.470: INFO: pod-handle-http-request started at <nil> (0+0 container statuses recorded) May 19 19:17:25.470: INFO: externalsvc-qf6d6 started at 2022-05-19 19:16:57 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.470: INFO: Container externalsvc ready: true, restart count 0 May 19 19:17:25.470: INFO: hostexec-kind-worker-tx8t9 started at 2022-05-19 19:16:57 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.470: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:17:25.470: INFO: up-down-3-9g97q started at 2022-05-19 19:16:56 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.470: INFO: Container up-down-3 ready: true, restart count 0 May 19 19:17:25.470: INFO: netserver-0 started at 2022-05-19 19:17:15 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.470: INFO: Container webserver ready: false, restart count 0 May 19 19:17:25.470: INFO: up-down-2-prvvv started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.470: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:17:25.470: INFO: ss2-0 started at 2022-05-19 19:15:40 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.470: INFO: Container webserver ready: true, restart count 0 May 19 19:17:25.470: INFO: exec-volume-test-preprovisionedpv-kknh started at 2022-05-19 19:17:12 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.470: INFO: Container exec-container-preprovisionedpv-kknh ready: false, restart count 0 May 19 19:17:25.470: INFO: pvc-volume-tester-rj6mn started at <nil> (0+0 container statuses recorded) May 19 19:17:25.605: INFO: Latency metrics for node kind-worker May 19 19:17:25.605: INFO: Logging node info for node kind-worker2 May 19 19:17:25.609: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 7a16523d-7da3-4c78-89f3-8eb0caae50f1 48252 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-3458":"csi-mock-csi-mock-volumes-3458"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-19 19:16:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9e448859533f489fa0673b5d908c4c8a,SystemUUID:73bc70df-3537-48c2-a00e-91739ab5b72a,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 
k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf 
k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:17:25.609: INFO: Logging kubelet events for node kind-worker2 May 19 19:17:25.613: INFO: Logging pods the kubelet thinks is on node kind-worker2 May 19 19:17:25.639: INFO: netserver-1 started at 2022-05-19 19:14:53 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.639: INFO: Container webserver ready: true, restart count 0 May 19 19:17:25.639: INFO: csi-hostpathplugin-0 started at 2022-05-19 19:15:33 +0000 UTC (0+7 container statuses recorded) May 19 19:17:25.639: INFO: Container csi-attacher ready: false, restart count 0 May 19 19:17:25.639: INFO: Container csi-provisioner ready: false, restart count 0 May 19 19:17:25.639: INFO: Container csi-resizer ready: false, restart count 0 May 19 19:17:25.639: INFO: Container csi-snapshotter ready: false, restart count 0 May 19 19:17:25.639: INFO: Container hostpath ready: false, restart count 0 May 19 19:17:25.639: INFO: Container liveness-probe ready: false, restart count 0 May 19 19:17:25.639: INFO: Container node-driver-registrar ready: false, restart count 0 May 19 19:17:25.639: INFO: pvc-volume-tester-mf576 started at <nil> (0+0 
container statuses recorded) May 19 19:17:25.639: INFO: e2e-net-server started at <nil> (0+0 container statuses recorded) May 19 19:17:25.639: INFO: rs-jmlhb started at 2022-05-19 19:15:20 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.639: INFO: Container donothing ready: false, restart count 0 May 19 19:17:25.639: INFO: verify-service-up-host-exec-pod started at <nil> (0+0 container statuses recorded) May 19 19:17:25.639: INFO: up-down-3-c564n started at 2022-05-19 19:16:56 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.639: INFO: Container up-down-3 ready: true, restart count 0 May 19 19:17:25.639: INFO: pod-b119a076-b487-49d6-a12d-03fe0abd6738 started at <nil> (0+0 container statuses recorded) May 19 19:17:25.639: INFO: netserver-1 started at <nil> (0+0 container statuses recorded) May 19 19:17:25.639: INFO: execpod99569 started at <nil> (0+0 container statuses recorded) May 19 19:17:25.639: INFO: netserver-1 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.639: INFO: Container webserver ready: true, restart count 0 May 19 19:17:25.639: INFO: csi-mockplugin-0 started at 2022-05-19 19:16:59 +0000 UTC (0+3 container statuses recorded) May 19 19:17:25.639: INFO: Container csi-provisioner ready: true, restart count 0 May 19 19:17:25.639: INFO: Container driver-registrar ready: true, restart count 0 May 19 19:17:25.639: INFO: Container mock ready: true, restart count 0 May 19 19:17:25.639: INFO: busybox-91f9a5ba-0aeb-4445-a18c-ab8e7ec56a1a started at 2022-05-19 19:15:23 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.639: INFO: Container busybox ready: true, restart count 0 May 19 19:17:25.639: INFO: kindnet-jk9nv started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.639: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:17:25.639: INFO: hostexec-kind-worker2-28nz4 started at 2022-05-19 19:16:57 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.639: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:17:25.639: INFO: deployment-585449566-wvn6x started at 2022-05-19 19:17:12 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.639: INFO: Container nginx ready: false, restart count 0 May 19 19:17:25.639: INFO: to-be-attached-pod started at 2022-05-19 19:17:17 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.639: INFO: Container container1 ready: false, restart count 0 May 19 19:17:25.639: INFO: helper-pod-create-pvc-16b9e577-e8a1-4133-81c6-a1b0c945ba44 started at <nil> (0+0 container statuses recorded) May 19 19:17:25.639: INFO: hostexec-kind-worker2-dfshp started at 2022-05-19 19:16:57 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.639: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:17:25.639: INFO: netserver-1 started at 2022-05-19 19:16:57 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.639: INFO: Container webserver ready: false, restart count 0 May 19 19:17:25.639: INFO: up-down-3-5xznd started at 2022-05-19 19:16:56 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.639: INFO: Container up-down-3 ready: true, restart count 0 May 19 19:17:25.639: INFO: pod-prestop-hook-d6d0551c-4cf0-4204-a542-aec0d4af226c started at 2022-05-19 19:16:56 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.639: INFO: Container nginx ready: true, restart count 0 May 19 19:17:25.639: INFO: kube-proxy-wgjrm started at 2022-05-19 18:59:58 +0000 UTC (0+1 container 
statuses recorded) May 19 19:17:25.639: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:17:25.639: INFO: hostexec-kind-worker2-6dxf4 started at 2022-05-19 19:17:12 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.639: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:17:25.639: INFO: sample-webhook-deployment-78988fc6cd-wg5p8 started at 2022-05-19 19:16:57 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.639: INFO: Container sample-webhook ready: true, restart count 0 May 19 19:17:25.639: INFO: pod-handle-http-request started at <nil> (0+0 container statuses recorded) May 19 19:17:25.639: INFO: simpletest.rc-qxghd started at 2022-05-19 19:16:57 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.639: INFO: Container nginx ready: true, restart count 0 May 19 19:17:25.639: INFO: externalsvc-4xd9s started at 2022-05-19 19:16:57 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.639: INFO: Container externalsvc ready: true, restart count 0 May 19 19:17:25.639: INFO: up-down-2-5zs72 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:17:25.639: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:17:25.818: INFO: Latency metrics for node kind-worker2 May 19 19:17:25.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5373" for this suite.
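The teardown above ends with the framework's standard post-test health check, "Waiting up to 3m0s for all (but 0) nodes to be ready", before the namespace is destroyed. A rough standalone equivalent of that check with client-go is sketched below; the helper name, the 5s poll interval, and the kubeconfig handling are illustrative assumptions rather than the framework's actual implementation (the kubeconfig path matches the one used by this run):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allNodesReady polls the API until every node reports a Ready=True
// condition, mirroring the "Waiting up to 3m0s for all (but 0) nodes to be
// ready" step in the AfterEach output above.
func allNodesReady(c kubernetes.Interface, timeout time.Duration) error {
	return wait.PollImmediate(5*time.Second, timeout, func() (bool, error) {
		nodes, err := c.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, nil // treat transient API errors as "not yet", keep polling
		}
		for _, n := range nodes.Items {
			ready := false
			for _, cond := range n.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false, nil
			}
		}
		return true, nil
	})
}

func main() {
	// Kubeconfig path as used by this kind-based run; adjust for other clusters.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/kind-test-config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(allNodesReady(client, 3*time.Minute))
}
```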
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\sperform\scanary\supdates\sand\sphased\srolling\supdates\sof\stemplate\smodifications\s\[Conformance\]$'
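The --ginkgo.focus value is a regular expression matched against each spec's fully assembled name (the suite name plus the nested Describe/It texts), which is why spaces and brackets are escaped. A quick way to sanity-check a focus pattern before launching a full run (plain Go, nothing framework-specific; the spec string is reconstructed from the pattern itself):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Focus pattern copied from the command above.
	focus := regexp.MustCompile(`Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\sperform\scanary\supdates\sand\sphased\srolling\supdates\sof\stemplate\smodifications\s\[Conformance\]$`)

	// Full spec name as Ginkgo assembles it for this test.
	spec := "Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality " +
		"[StatefulSetBasic] should perform canary updates and phased rolling updates " +
		"of template modifications [Conformance]"

	fmt.Println(focus.MatchString(spec)) // true
}
```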
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 19 19:16:19.383: Unexpected error: <*errors.StatusError | 0xc001e9d720>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go:68from junit_09.xml
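This is the same transient "etcdserver: request timed out" 500 seen in the CronJob failure, surfacing here at statefulset/rest.go:68: GetPodList asserts on any List error via ExpectNoError, so a single failed API call aborts the spec rather than being retried on the next poll. A compressed, illustrative reconstruction of the call chain in the stack trace that follows (WaitForRunning polling every 10s for up to 10m, which are the 0x2540be400 and 0x8bb2c97000 durations in the trace, each iteration calling GetPodList to list the StatefulSet's pods by selector); this is a sketch, not the framework's exact code:

```go
package e2esketch

import (
	"context"
	"fmt"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// getPodList approximates the helper at test/e2e/framework/statefulset/rest.go:68:
// list the StatefulSet's pods by its selector. The real helper treats any error,
// including a transient etcd timeout, as a test failure via ExpectNoError.
func getPodList(c kubernetes.Interface, ss *appsv1.StatefulSet) (*corev1.PodList, error) {
	selector, err := metav1.LabelSelectorAsSelector(ss.Spec.Selector)
	if err != nil {
		return nil, err
	}
	return c.CoreV1().Pods(ss.Namespace).List(context.TODO(),
		metav1.ListOptions{LabelSelector: selector.String()})
}

// waitForRunningAndReady approximates WaitForRunning from wait.go:35: poll every
// 10s for up to 10m until the expected number of pods is Running and Ready.
func waitForRunningAndReady(c kubernetes.Interface, want int, ss *appsv1.StatefulSet) error {
	return wait.PollImmediate(10*time.Second, 10*time.Minute, func() (bool, error) {
		pods, err := getPodList(c, ss)
		if err != nil {
			// Returning the error ends the poll immediately, analogous to the
			// framework failing the spec on the first API error.
			return false, fmt.Errorf("listing pods for %s: %w", ss.Name, err)
		}
		runningAndReady := 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				continue
			}
			for _, cond := range p.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					runningAndReady++
					break
				}
			}
		}
		return runningAndReady == want, nil
	})
}
```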
[BeforeEach] [sig-apps] StatefulSet /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 19 19:15:40.834: INFO: >>> kubeConfig: /root/.kube/kind-test-config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 STEP: Creating service test in namespace statefulset-6908 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet May 19 19:15:40.935: INFO: Found 0 stateful pods, waiting for 3 May 19 19:15:50.941: INFO: Found 1 stateful pods, waiting for 3 May 19 19:16:00.943: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 19 19:16:00.943: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 19 19:16:00.943: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 19 19:16:19.383: FAIL: Unexpected error: <*errors.StatusError | 0xc001e9d720>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList(0x79bc3e8, 0xc002162160, 0xc000039900, 0x42) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go:68 +0x21e k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1(0xc00201bec0, 0xc00201bec0, 0xc00201bec0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:37 +0x67 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x79286a8, 0xc000124010, 0xc00146ae44, 0x1, 0x2) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x79286a8, 0xc000124010, 0xc002af0040, 0x0, 0x0, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext(0x79286a8, 0xc000124010, 0xc0032ffae8, 0xc002af0040, 0x0, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:657 +0x159 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x79286a8, 0xc000124010, 0xc0032ffa01, 0xc0032ffae8, 0xc002af0040, 0x686a460, 0xc002af0040) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:591 +0xa5
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x79286a8, 0xc000124010, 0x2540be400, 0x8bb2c97000, 0xc002af0040, 0x6beee60, 0x1) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2540be400, 0x8bb2c97000, 0xc00205c540, 0x2, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning(0x79bc3e8, 0xc002162160, 0x300000003, 0xc000039900) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:35 +0x9d k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func9.2.8() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:327 +0x2a7 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000103200) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000103200) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc000103200, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 E0519 19:16:19.385292 86057 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"May 19 19:16:19.383: Unexpected error:\n <*errors.StatusError | 0xc001e9d720>: {\n ErrStatus: {\n TypeMeta: {Kind: \"\", APIVersion: \"\"},\n ListMeta: {\n SelfLink: \"\",\n ResourceVersion: \"\",\n Continue: \"\",\n RemainingItemCount: nil,\n },\n Status: \"Failure\",\n Message: \"etcdserver: request timed out\",\n Reason: \"\",\n Details: nil,\n Code: 500,\n },\n }\n etcdserver: request timed out\noccurred", Filename:"/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go", Line:68, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList(0x79bc3e8, 0xc002162160, 0xc000039900, 0x42)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go:68 +0x21e\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1(0xc00201bec0, 0xc00201bec0, 0xc00201bec0)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:37 +0x67\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x79286a8, 0xc000124010, 0xc00146ae44, 0x1, 0x2)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x79286a8, 0xc000124010, 0xc002af0040, 0x0, 0x0, 0x0)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext(0x79286a8, 0xc000124010, 0xc0032ffae8, 0xc002af0040, 0x0, 0x0)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:657 
+0x159\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x79286a8, 0xc000124010, 0xc0032ffa01, 0xc0032ffae8, 0xc002af0040, 0x686a460, 0xc002af0040)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:591 +0xa5\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x79286a8, 0xc000124010, 0x2540be400, 0x8bb2c97000, 0xc002af0040, 0x6beee60, 0x1)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2540be400, 0x8bb2c97000, 0xc00205c540, 0x2, 0x0)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning(0x79bc3e8, 0xc002162160, 0x300000003, 0xc000039900)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:35 +0x9d\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:80\nk8s.io/kubernetes/test/e2e/apps.glob..func9.2.8()\n\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:327 +0x2a7\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc000103200)\n\t_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc000103200)\n\t_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b\ntesting.tRunner(0xc000103200, 0x72e36d8)\n\t/usr/local/go/src/testing/testing.go:1203 +0xe5\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1248 +0x2b3"} ( Your test failed. Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. 
) goroutine 123 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x6c03da0, 0xc00430acc0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86 panic(0x6c03da0, 0xc00430acc0) /usr/local/go/src/runtime/panic.go:965 +0x1b9 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc001a9d680, 0x224, 0x8985320, 0x71, 0x44, 0xc001fdd900, 0xc60) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5 panic(0x6330600, 0x77df3e0) /usr/local/go/src/runtime/panic.go:965 +0x1b9 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc001a9d680, 0x224, 0xc00146a378, 0x1, 0x1) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xc8 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc001a9d680, 0x224, 0xc00146a460, 0x1, 0x1) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5 k8s.io/kubernetes/test/e2e/framework.Fail(0xc001a9d440, 0x20f, 0xc0033f4e20, 0x1, 0x1) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:62 +0x1ea k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc00146a5e8, 0x7914db8, 0xa0e1f88, 0x0, 0x0, 0x0, 0x0, 0xc001e9d720) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:79 +0x216 k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc00146a5e8, 0x7914db8, 0xa0e1f88, 0x0, 0x0, 0x0, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0xc7 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x1, 0x78b1d40, 0xc001e9d720, 0x0, 0x0, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:46 +0xe7 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:40 k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList(0x79bc3e8, 0xc002162160, 0xc000039900, 0x42) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go:68 +0x21e k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1(0xc00201bec0, 0xc00201bec0, 0xc00201bec0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:37 +0x67 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1(0x79286a8, 0xc000124010, 0xc00146ae44, 0x1, 0x2) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:217 +0x26 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext(0x79286a8, 0xc000124010, 0xc002af0040, 0x0, 0x0, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:230 +0x7f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext(0x79286a8, 0xc000124010, 0xc0032ffae8, 0xc002af0040, 0x0, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:657 +0x159 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x79286a8, 0xc000124010, 0xc0032ffa01, 0xc0032ffae8, 0xc002af0040, 0x686a460, 0xc002af0040) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:591 +0xa5 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext(0x79286a8, 0xc000124010, 0x2540be400, 0x8bb2c97000, 0xc002af0040, 0x6beee60, 0x1) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:523 +0x66 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2540be400, 0x8bb2c97000, 0xc00205c540, 0x2, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:509 +0x6f k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning(0x79bc3e8, 0xc002162160, 0x300000003, 0xc000039900) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:35 +0x9d k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func9.2.8() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:327 +0x2a7 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000182240, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000182240, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc000d9bf00, 0x78adcc0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0024f8960, 0x0, 0x78adcc0, 0xc00016a800) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x72f k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0024f8960, 0x78adcc0, 0xc00016a800) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc00246f900, 0xc0024f8960, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc00246f900, 0x1) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x147 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc00246f900, 0xc002cc1278) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000214070, 0x7f2adc30dc18, 0xc000103200, 0x70ae1f5, 0x14, 0xc000d8c6c0, 0x3, 0x3, 0x79634d8, 0xc00016a800, ...) 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:79 +0x546 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x78b37c0, 0xc000103200, 0x70ae1f5, 0x14, 0xc0009b5c40, 0x3, 0x4, 0x4) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:219 +0x218 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x78b37c0, 0xc000103200, 0x70ae1f5, 0x14, 0xc0008f7be0, 0x2, 0x2, 0x25) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:207 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000103200) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000103200) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc000103200, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 May 19 19:16:24.044: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://127.0.0.1:38671 --kubeconfig=/root/.kube/kind-test-config --namespace=statefulset-6908 describe po ss2-0' May 19 19:16:24.423: INFO: stderr: "" May 19 19:16:24.423: INFO: stdout: "Name: ss2-0\nNamespace: statefulset-6908\nPriority: 0\nNode: kind-worker/172.18.0.3\nStart Time: Thu, 19 May 2022 19:15:40 +0000\nLabels: baz=blah\n controller-revision-hash=ss2-677d6db895\n foo=bar\n statefulset.kubernetes.io/pod-name=ss2-0\nAnnotations: <none>\nStatus: Running\nIP: 10.244.1.2\nIPs:\n IP: 10.244.1.2\nControlled By: StatefulSet/ss2\nContainers:\n webserver:\n Container ID: containerd://a36dcec24f2acc46b9ead6a73fe8113caf833691eba2db8a4acf21621cbf5e8c\n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\n Image ID: k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\n Port: <none>\n Host Port: <none>\n State: Running\n Started: Thu, 19 May 2022 19:15:42 +0000\n Ready: True\n Restart Count: 0\n Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r79td (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-r79td:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 44s default-scheduler Successfully assigned statefulset-6908/ss2-0 to kind-worker\n Normal Pulled 42s kubelet Container image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\" already present on machine\n Normal Created 42s kubelet Created container webserver\n Normal Started 42s kubelet Started container webserver\n" May 19 19:16:24.423: INFO: Output of kubectl describe ss2-0: Name: ss2-0 Namespace: 
statefulset-6908 Priority: 0 Node: kind-worker/172.18.0.3 Start Time: Thu, 19 May 2022 19:15:40 +0000 Labels: baz=blah controller-revision-hash=ss2-677d6db895 foo=bar statefulset.kubernetes.io/pod-name=ss2-0 Annotations: <none> Status: Running IP: 10.244.1.2 IPs: IP: 10.244.1.2 Controlled By: StatefulSet/ss2 Containers: webserver: Container ID: containerd://a36dcec24f2acc46b9ead6a73fe8113caf833691eba2db8a4acf21621cbf5e8c Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Image ID: k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 Port: <none> Host Port: <none> State: Running Started: Thu, 19 May 2022 19:15:42 +0000 Ready: True Restart Count: 0 Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r79td (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-r79td: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 44s default-scheduler Successfully assigned statefulset-6908/ss2-0 to kind-worker Normal Pulled 42s kubelet Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine Normal Created 42s kubelet Created container webserver Normal Started 42s kubelet Started container webserver May 19 19:16:24.423: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://127.0.0.1:38671 --kubeconfig=/root/.kube/kind-test-config --namespace=statefulset-6908 logs ss2-0 --tail=100' May 19 19:16:24.658: INFO: stderr: "" May 19 19:16:24.658: INFO: stdout: "[Thu May 19 19:15:42.525701 2022] [mpm_event:notice] [pid 1:tid 140043244903272] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Thu May 19 19:15:42.525795 2022] [core:notice] [pid 1:tid 140043244903272] AH00094: Command line: 'httpd -D FOREGROUND'\n10.244.1.1 - - [19/May/2022:19:15:42 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:15:42 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:15:43 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:15:44 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:15:45 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:15:46 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:15:47 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:15:48 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:15:49 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:15:50 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:15:51 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:15:52 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:15:53 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:15:54 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:15:55 +0000] \"GET /index.html HTTP/1.1\" 200 
45\n10.244.1.1 - - [19/May/2022:19:15:56 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:15:57 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:15:58 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:15:59 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:00 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:01 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:02 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:03 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:04 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:05 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:06 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:07 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:08 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:09 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:10 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:11 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:12 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:13 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:14 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:15 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:16 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:17 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:18 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:19 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:20 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:21 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:22 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:23 +0000] \"GET /index.html HTTP/1.1\" 200 45\n" May 19 19:16:24.658: INFO: Last 100 log lines of ss2-0: [Thu May 19 19:15:42.525701 2022] [mpm_event:notice] [pid 1:tid 140043244903272] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations [Thu May 19 19:15:42.525795 2022] [core:notice] [pid 1:tid 140043244903272] AH00094: Command line: 'httpd -D FOREGROUND' 10.244.1.1 - - [19/May/2022:19:15:42 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:15:42 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:15:43 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:15:44 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:15:45 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:15:46 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:15:47 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:15:48 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:15:49 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:15:50 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:15:51 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:15:52 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - 
[19/May/2022:19:15:53 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:15:54 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:15:55 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:15:56 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:15:57 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:15:58 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:15:59 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:00 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:01 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:02 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:03 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:04 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:05 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:06 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:07 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:08 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:09 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:10 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:11 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:12 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:13 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:14 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:15 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:16 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:17 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:18 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:19 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:20 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:21 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:22 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:23 +0000] "GET /index.html HTTP/1.1" 200 45 May 19 19:16:24.658: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://127.0.0.1:38671 --kubeconfig=/root/.kube/kind-test-config --namespace=statefulset-6908 describe po ss2-1' May 19 19:16:25.033: INFO: stderr: "" May 19 19:16:25.033: INFO: stdout: "Name: ss2-1\nNamespace: statefulset-6908\nPriority: 0\nNode: kind-worker2/172.18.0.2\nStart Time: Thu, 19 May 2022 19:15:51 +0000\nLabels: baz=blah\n controller-revision-hash=ss2-677d6db895\n foo=bar\n statefulset.kubernetes.io/pod-name=ss2-1\nAnnotations: <none>\nStatus: Running\nIP: 10.244.2.22\nIPs:\n IP: 10.244.2.22\nControlled By: StatefulSet/ss2\nContainers:\n webserver:\n Container ID: containerd://e9429243ed176848da89ad08d28bc25f9ce41dc92780c8c04c8c55a12b7a4f66\n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\n Image ID: k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\n Port: <none>\n Host Port: <none>\n State: Running\n Started: Thu, 19 May 2022 19:15:53 +0000\n Ready: True\n Restart Count: 0\n Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 
#failure=1\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bkjf2 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-bkjf2:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 34s default-scheduler Successfully assigned statefulset-6908/ss2-1 to kind-worker2\n Normal Pulled 32s kubelet Container image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\" already present on machine\n Normal Created 32s kubelet Created container webserver\n Normal Started 32s kubelet Started container webserver\n" May 19 19:16:25.033: INFO: Output of kubectl describe ss2-1: Name: ss2-1 Namespace: statefulset-6908 Priority: 0 Node: kind-worker2/172.18.0.2 Start Time: Thu, 19 May 2022 19:15:51 +0000 Labels: baz=blah controller-revision-hash=ss2-677d6db895 foo=bar statefulset.kubernetes.io/pod-name=ss2-1 Annotations: <none> Status: Running IP: 10.244.2.22 IPs: IP: 10.244.2.22 Controlled By: StatefulSet/ss2 Containers: webserver: Container ID: containerd://e9429243ed176848da89ad08d28bc25f9ce41dc92780c8c04c8c55a12b7a4f66 Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Image ID: k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 Port: <none> Host Port: <none> State: Running Started: Thu, 19 May 2022 19:15:53 +0000 Ready: True Restart Count: 0 Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bkjf2 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-bkjf2: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 34s default-scheduler Successfully assigned statefulset-6908/ss2-1 to kind-worker2 Normal Pulled 32s kubelet Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine Normal Created 32s kubelet Created container webserver Normal Started 32s kubelet Started container webserver May 19 19:16:25.033: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://127.0.0.1:38671 --kubeconfig=/root/.kube/kind-test-config --namespace=statefulset-6908 logs ss2-1 --tail=100' May 19 19:16:25.245: INFO: stderr: "" May 19 19:16:25.245: INFO: stdout: "[Thu May 19 19:15:53.589190 2022] [mpm_event:notice] [pid 1:tid 140705426963304] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Thu May 19 19:15:53.589268 2022] [core:notice] [pid 1:tid 140705426963304] AH00094: Command line: 'httpd -D FOREGROUND'\n10.244.2.1 - - [19/May/2022:19:15:54 +0000] \"GET /index.html HTTP/1.1\" 200 
45\n10.244.2.1 - - [19/May/2022:19:15:54 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:15:55 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:15:56 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:15:57 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:15:58 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:15:59 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:16:00 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:16:01 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:16:02 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:16:03 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:16:04 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:16:05 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:16:06 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:16:07 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:16:08 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:16:09 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:16:10 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:16:11 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:16:12 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:16:13 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:16:14 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:16:15 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:16:16 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:16:17 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:16:18 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:16:19 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:16:20 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:16:21 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:16:22 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:16:23 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.2.1 - - [19/May/2022:19:16:24 +0000] \"GET /index.html HTTP/1.1\" 200 45\n" May 19 19:16:25.246: INFO: Last 100 log lines of ss2-1: [Thu May 19 19:15:53.589190 2022] [mpm_event:notice] [pid 1:tid 140705426963304] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations [Thu May 19 19:15:53.589268 2022] [core:notice] [pid 1:tid 140705426963304] AH00094: Command line: 'httpd -D FOREGROUND' 10.244.2.1 - - [19/May/2022:19:15:54 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:15:54 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:15:55 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:15:56 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:15:57 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:15:58 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:15:59 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:16:00 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:16:01 +0000] "GET /index.html HTTP/1.1" 200 45 
10.244.2.1 - - [19/May/2022:19:16:02 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:16:03 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:16:04 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:16:05 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:16:06 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:16:07 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:16:08 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:16:09 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:16:10 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:16:11 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:16:12 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:16:13 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:16:14 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:16:15 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:16:16 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:16:17 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:16:18 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:16:19 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:16:20 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:16:21 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:16:22 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:16:23 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.2.1 - - [19/May/2022:19:16:24 +0000] "GET /index.html HTTP/1.1" 200 45 May 19 19:16:25.246: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://127.0.0.1:38671 --kubeconfig=/root/.kube/kind-test-config --namespace=statefulset-6908 describe po ss2-2' May 19 19:16:25.447: INFO: stderr: "" May 19 19:16:25.447: INFO: stdout: "Name: ss2-2\nNamespace: statefulset-6908\nPriority: 0\nNode: kind-worker/\nLabels: baz=blah\n controller-revision-hash=ss2-677d6db895\n foo=bar\n statefulset.kubernetes.io/pod-name=ss2-2\nAnnotations: <none>\nStatus: Pending\nIP: \nIPs: <none>\nControlled By: StatefulSet/ss2\nContainers:\n webserver:\n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\n Port: <none>\n Host Port: <none>\n Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bjrx8 (ro)\nConditions:\n Type Status\n PodScheduled True \nVolumes:\n kube-api-access-bjrx8:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 26s default-scheduler Successfully assigned statefulset-6908/ss2-2 to kind-worker\n Normal Pulled 24s kubelet Container image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\" already present on machine\n Normal Created 24s kubelet Created container webserver\n Normal Started 24s kubelet Started container webserver\n" May 19 
19:16:25.447: INFO: Output of kubectl describe ss2-2: Name: ss2-2 Namespace: statefulset-6908 Priority: 0 Node: kind-worker/ Labels: baz=blah controller-revision-hash=ss2-677d6db895 foo=bar statefulset.kubernetes.io/pod-name=ss2-2 Annotations: <none> Status: Pending IP: IPs: <none> Controlled By: StatefulSet/ss2 Containers: webserver: Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Port: <none> Host Port: <none> Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bjrx8 (ro) Conditions: Type Status PodScheduled True Volumes: kube-api-access-bjrx8: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 26s default-scheduler Successfully assigned statefulset-6908/ss2-2 to kind-worker Normal Pulled 24s kubelet Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine Normal Created 24s kubelet Created container webserver Normal Started 24s kubelet Started container webserver May 19 19:16:25.447: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/_output/bin/kubectl --server=https://127.0.0.1:38671 --kubeconfig=/root/.kube/kind-test-config --namespace=statefulset-6908 logs ss2-2 --tail=100' May 19 19:16:25.666: INFO: stderr: "" May 19 19:16:25.666: INFO: stdout: "[Thu May 19 19:16:01.552068 2022] [mpm_event:notice] [pid 1:tid 140591592917864] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Thu May 19 19:16:01.552140 2022] [core:notice] [pid 1:tid 140591592917864] AH00094: Command line: 'httpd -D FOREGROUND'\n10.244.1.1 - - [19/May/2022:19:16:01 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:02 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:03 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:04 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:05 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:06 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:07 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:08 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:09 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:10 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:11 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:12 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:13 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:14 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:15 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:16 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:17 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:18 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:19 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 
- - [19/May/2022:19:16:20 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:21 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:22 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:23 +0000] \"GET /index.html HTTP/1.1\" 200 45\n10.244.1.1 - - [19/May/2022:19:16:24 +0000] \"GET /index.html HTTP/1.1\" 200 45\n" May 19 19:16:25.666: INFO: Last 100 log lines of ss2-2: [Thu May 19 19:16:01.552068 2022] [mpm_event:notice] [pid 1:tid 140591592917864] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations [Thu May 19 19:16:01.552140 2022] [core:notice] [pid 1:tid 140591592917864] AH00094: Command line: 'httpd -D FOREGROUND' 10.244.1.1 - - [19/May/2022:19:16:01 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:02 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:03 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:04 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:05 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:06 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:07 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:08 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:09 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:10 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:11 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:12 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:13 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:14 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:15 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:16 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:17 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:18 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:19 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:20 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:21 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:22 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:23 +0000] "GET /index.html HTTP/1.1" 200 45 10.244.1.1 - - [19/May/2022:19:16:24 +0000] "GET /index.html HTTP/1.1" 200 45 May 19 19:16:25.666: INFO: Deleting all statefulset in ns statefulset-6908 May 19 19:16:25.688: INFO: Scaling statefulset ss2 to 0 May 19 19:17:45.796: INFO: Waiting for statefulset status.replicas updated to 0 May 19 19:17:45.800: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 �[1mSTEP�[0m: Collecting events from namespace "statefulset-6908". �[1mSTEP�[0m: Found 23 events. 
May 19 19:17:45.852: INFO: At 2022-05-19 19:15:40 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-0 in StatefulSet ss2 successful
May 19 19:17:45.852: INFO: At 2022-05-19 19:15:40 +0000 UTC - event for ss2-0: {default-scheduler } Scheduled: Successfully assigned statefulset-6908/ss2-0 to kind-worker
May 19 19:17:45.853: INFO: At 2022-05-19 19:15:42 +0000 UTC - event for ss2-0: {kubelet kind-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine
May 19 19:17:45.853: INFO: At 2022-05-19 19:15:42 +0000 UTC - event for ss2-0: {kubelet kind-worker} Created: Created container webserver
May 19 19:17:45.853: INFO: At 2022-05-19 19:15:42 +0000 UTC - event for ss2-0: {kubelet kind-worker} Started: Started container webserver
May 19 19:17:45.853: INFO: At 2022-05-19 19:15:51 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-1 in StatefulSet ss2 successful
May 19 19:17:45.853: INFO: At 2022-05-19 19:15:51 +0000 UTC - event for ss2-1: {default-scheduler } Scheduled: Successfully assigned statefulset-6908/ss2-1 to kind-worker2
May 19 19:17:45.853: INFO: At 2022-05-19 19:15:53 +0000 UTC - event for ss2-1: {kubelet kind-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine
May 19 19:17:45.853: INFO: At 2022-05-19 19:15:53 +0000 UTC - event for ss2-1: {kubelet kind-worker2} Created: Created container webserver
May 19 19:17:45.853: INFO: At 2022-05-19 19:15:53 +0000 UTC - event for ss2-1: {kubelet kind-worker2} Started: Started container webserver
May 19 19:17:45.853: INFO: At 2022-05-19 19:15:59 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-2 in StatefulSet ss2 successful
May 19 19:17:45.853: INFO: At 2022-05-19 19:15:59 +0000 UTC - event for ss2-2: {default-scheduler } Scheduled: Successfully assigned statefulset-6908/ss2-2 to kind-worker
May 19 19:17:45.853: INFO: At 2022-05-19 19:16:01 +0000 UTC - event for ss2-2: {kubelet kind-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine
May 19 19:17:45.853: INFO: At 2022-05-19 19:16:01 +0000 UTC - event for ss2-2: {kubelet kind-worker} Created: Created container webserver
May 19 19:17:45.853: INFO: At 2022-05-19 19:16:01 +0000 UTC - event for ss2-2: {kubelet kind-worker} Started: Started container webserver
May 19 19:17:45.853: INFO: At 2022-05-19 19:16:56 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-2 in StatefulSet ss2 successful
May 19 19:17:45.853: INFO: At 2022-05-19 19:16:56 +0000 UTC - event for ss2-2: {kubelet kind-worker} Killing: Stopping container webserver
May 19 19:17:45.853: INFO: At 2022-05-19 19:16:56 +0000 UTC - event for ss2-2: {kubelet kind-worker} Unhealthy: Readiness probe failed: Get "http://10.244.1.12:80/index.html": dial tcp 10.244.1.12:80: connect: connection refused
May 19 19:17:45.853: INFO: At 2022-05-19 19:17:10 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-1 in StatefulSet ss2 successful
May 19 19:17:45.853: INFO: At 2022-05-19 19:17:10 +0000 UTC - event for ss2-1: {kubelet kind-worker2} Killing: Stopping container webserver
May 19 19:17:45.853: INFO: At 2022-05-19 19:17:24 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-0 in StatefulSet ss2 successful
May 19 19:17:45.853: INFO: At 2022-05-19 19:17:24 +0000 UTC - event for ss2-0: {kubelet kind-worker}
Unhealthy: Readiness probe failed: Get "http://10.244.1.2:80/index.html": dial tcp 10.244.1.2:80: connect: connection refused May 19 19:17:45.853: INFO: At 2022-05-19 19:17:24 +0000 UTC - event for ss2-0: {kubelet kind-worker} Killing: Stopping container webserver May 19 19:17:45.855: INFO: POD NODE PHASE GRACE CONDITIONS May 19 19:17:45.855: INFO: May 19 19:17:45.870: INFO: Logging node info for node kind-control-plane May 19 19:17:45.915: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7e5be5e1-f40a-4fa4-9c1e-ae1356a8d5d2 42765 0 2022-05-19 18:59:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-19 18:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-19 18:59:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-19 18:59:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 
+0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6da7820732414c09b805ad6b354ea130,SystemUUID:cd27b40e-6f1b-48bc-8ba8-36264df7de17,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:17:45.915: INFO: Logging kubelet events for node kind-control-plane May 19 19:17:45.935: INFO: Logging pods the kubelet thinks is on node kind-control-plane May 19 19:17:45.959: INFO: kube-controller-manager-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:17:45.959: INFO: Container kube-controller-manager ready: true, restart count 1 May 19 19:17:45.959: INFO: kindnet-sp68s started at 2022-05-19 18:59:37 +0000 UTC (0+1 container statuses recorded) May 19 19:17:45.959: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:17:45.959: INFO: coredns-78fcd69978-szhjl started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:17:45.959: INFO: Container coredns ready: true, restart count 0 May 19 19:17:45.959: INFO: local-path-provisioner-6c9449b9dd-rq246 started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:17:45.959: INFO: 
Container local-path-provisioner ready: true, restart count 0 May 19 19:17:45.959: INFO: etcd-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:17:45.959: INFO: Container etcd ready: true, restart count 0 May 19 19:17:45.959: INFO: kube-apiserver-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:17:45.959: INFO: Container kube-apiserver ready: true, restart count 0 May 19 19:17:45.959: INFO: kube-proxy-c8wmp started at 2022-05-19 18:59:52 +0000 UTC (0+1 container statuses recorded) May 19 19:17:45.959: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:17:45.959: INFO: kube-scheduler-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:17:45.959: INFO: Container kube-scheduler ready: true, restart count 1 May 19 19:17:45.959: INFO: coredns-78fcd69978-79cfm started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:17:45.959: INFO: Container coredns ready: true, restart count 0 May 19 19:17:46.035: INFO: Latency metrics for node kind-control-plane May 19 19:17:46.035: INFO: Logging node info for node kind-worker May 19 19:17:46.048: INFO: Node Info: &Node{ObjectMeta:{kind-worker 5aace22e-9461-4dd4-8842-d4c95088e6c2 49157 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-1303":"csi-mock-csi-mock-volumes-1303","csi-mock-csi-mock-volumes-3250":"csi-mock-csi-mock-volumes-3250","csi-mock-csi-mock-volumes-6308":"csi-mock-csi-mock-volumes-6308"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-19 19:17:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {kube-controller-manager Update v1 2022-05-19 19:17:42 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:17:16 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:17:16 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:17:16 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:17:16 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:23febdd7255b49db9d80d997950dd2f2,SystemUUID:09159bf9-dc54-4c7a-91f7-a2bdb5d0f9d7,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 
k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:160595fccf5ad4e41cc0a7acf56027802bf1a2310e704f6505baf0f88746e277 k8s.gcr.io/build-image/debian-iptables:buster-v1.6.7],SizeBytes:23799982,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f 
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-3250^4,DevicePath:,},},Config:nil,},} May 19 19:17:46.048: INFO: Logging kubelet events for node kind-worker May 19 19:17:46.061: INFO: Logging pods the kubelet thinks is on node kind-worker May 19 19:17:46.076: INFO: csi-mockplugin-0 started at 2022-05-19 19:16:59 +0000 UTC (0+3 container statuses recorded) May 19 19:17:46.076: INFO: Container csi-provisioner ready: true, restart count 0 May 19 19:17:46.076: INFO: Container driver-registrar ready: true, restart count 0 May 19 19:17:46.076: INFO: Container mock ready: true, restart count 0 May 19 19:17:46.076: INFO: deployment-585449566-scnfx started at 2022-05-19 19:17:12 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.076: INFO: Container nginx ready: false, restart count 0 May 19 19:17:46.076: INFO: deployment-55649fd747-jqjxb started at 2022-05-19 19:17:12 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.076: INFO: Container nginx ready: false, restart count 0 May 19 19:17:46.076: INFO: csi-mockplugin-0 started at 2022-05-19 19:17:13 +0000 UTC (0+4 container statuses recorded) May 19 19:17:46.076: INFO: Container busybox ready: true, restart count 0 May 19 19:17:46.076: INFO: Container csi-provisioner ready: true, restart count 0 May 19 19:17:46.076: INFO: Container driver-registrar ready: true, restart count 0 May 19 19:17:46.076: INFO: Container mock ready: true, restart count 0 May 19 19:17:46.076: INFO: pod-with-prestop-exec-hook started at 2022-05-19 19:17:29 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.076: INFO: Container pod-with-prestop-exec-hook ready: true, restart count 0 May 19 19:17:46.076: INFO: pod-subpath-test-inlinevolume-6rrc started at <nil> (0+0 container statuses recorded) May 19 19:17:46.076: INFO: hostexec-kind-worker-sxv5m started at <nil> (0+0 container statuses recorded) May 19 19:17:46.076: INFO: kindnet-4gdb4 
started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.076: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:17:46.076: INFO: csi-mockplugin-attacher-0 started at 2022-05-19 19:17:29 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.076: INFO: Container csi-attacher ready: false, restart count 0 May 19 19:17:46.076: INFO: pod-subpath-test-inlinevolume-j997 started at 2022-05-19 19:17:31 +0000 UTC (0+2 container statuses recorded) May 19 19:17:46.076: INFO: Container test-container-subpath-inlinevolume-j997 ready: false, restart count 0 May 19 19:17:46.076: INFO: Container test-container-volume-inlinevolume-j997 ready: false, restart count 0 May 19 19:17:46.076: INFO: e2e-net-client started at <nil> (0+0 container statuses recorded) May 19 19:17:46.076: INFO: busybox-host-aliasesf0ade533-7aeb-4fbd-a076-c7babc5cfc89 started at 2022-05-19 19:16:57 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.076: INFO: Container busybox-host-aliasesf0ade533-7aeb-4fbd-a076-c7babc5cfc89 ready: true, restart count 0 May 19 19:17:46.076: INFO: kube-proxy-cv6pt started at 2022-05-19 18:59:55 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.076: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:17:46.076: INFO: pod-with-poststart-http-hook started at 2022-05-19 19:17:28 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.076: INFO: Container pod-with-poststart-http-hook ready: true, restart count 0 May 19 19:17:46.076: INFO: webserver-deployment-847dcfb7fb-nr8gk started at <nil> (0+0 container statuses recorded) May 19 19:17:46.076: INFO: netserver-0 started at 2022-05-19 19:16:57 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.076: INFO: Container webserver ready: true, restart count 0 May 19 19:17:46.076: INFO: up-down-2-gkj59 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.076: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:17:46.076: INFO: webserver-deployment-847dcfb7fb-nr9ss started at <nil> (0+0 container statuses recorded) May 19 19:17:46.076: INFO: webserver-deployment-847dcfb7fb-nrrbg started at <nil> (0+0 container statuses recorded) May 19 19:17:46.076: INFO: pod-handle-http-request started at 2022-05-19 19:17:09 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.076: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:17:46.076: INFO: webserver-deployment-847dcfb7fb-hzvtm started at <nil> (0+0 container statuses recorded) May 19 19:17:46.076: INFO: e2e-net-exec started at 2022-05-19 19:16:58 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.076: INFO: Container e2e-net-exec ready: true, restart count 0 May 19 19:17:46.076: INFO: pod-subpath-test-preprovisionedpv-scw5 started at 2022-05-19 19:17:12 +0000 UTC (0+2 container statuses recorded) May 19 19:17:46.076: INFO: Container test-container-subpath-preprovisionedpv-scw5 ready: false, restart count 0 May 19 19:17:46.076: INFO: Container test-container-volume-preprovisionedpv-scw5 ready: true, restart count 0 May 19 19:17:46.076: INFO: deployment-585449566-qq9tl started at 2022-05-19 19:17:12 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.076: INFO: Container nginx ready: false, restart count 0 May 19 19:17:46.076: INFO: hostexec-kind-worker-vdph4 started at 2022-05-19 19:17:12 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.076: INFO: Container agnhost-container ready: true, restart count 0 May 19 
19:17:46.076: INFO: up-down-3-9g97q started at 2022-05-19 19:16:56 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.076: INFO: Container up-down-3 ready: true, restart count 0 May 19 19:17:46.076: INFO: pvc-volume-tester-sfc7c started at <nil> (0+0 container statuses recorded) May 19 19:17:46.076: INFO: externalsvc-qf6d6 started at 2022-05-19 19:16:57 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.076: INFO: Container externalsvc ready: true, restart count 0 May 19 19:17:46.076: INFO: hostexec-kind-worker-tx8t9 started at 2022-05-19 19:16:57 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.076: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:17:46.076: INFO: webserver-deployment-847dcfb7fb-knrkz started at <nil> (0+0 container statuses recorded) May 19 19:17:46.076: INFO: netserver-0 started at 2022-05-19 19:17:15 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.076: INFO: Container webserver ready: false, restart count 0 May 19 19:17:46.076: INFO: up-down-2-prvvv started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.076: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:17:46.076: INFO: csi-mockplugin-0 started at 2022-05-19 19:17:29 +0000 UTC (0+3 container statuses recorded) May 19 19:17:46.076: INFO: Container csi-provisioner ready: false, restart count 0 May 19 19:17:46.076: INFO: Container driver-registrar ready: false, restart count 0 May 19 19:17:46.076: INFO: Container mock ready: false, restart count 0 May 19 19:17:46.076: INFO: deployment-shared-map-item-removal-55649fd747-fq56v started at <nil> (0+0 container statuses recorded) May 19 19:17:46.077: INFO: pvc-volume-tester-rwsf5 started at <nil> (0+0 container statuses recorded) May 19 19:17:46.077: INFO: pod-secrets-bfa7a073-84a9-4ef9-9f6a-e37d2d8ff45f started at 2022-05-19 19:17:21 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.077: INFO: Container secret-volume-test ready: false, restart count 0 May 19 19:17:46.297: INFO: Latency metrics for node kind-worker May 19 19:17:46.297: INFO: Logging node info for node kind-worker2 May 19 19:17:46.304: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 7a16523d-7da3-4c78-89f3-8eb0caae50f1 48252 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-3458":"csi-mock-csi-mock-volumes-3458"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet 
Update v1 2022-05-19 19:16:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9e448859533f489fa0673b5d908c4c8a,SystemUUID:73bc70df-3537-48c2-a00e-91739ab5b72a,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 
k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce 
k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:17:46.305: INFO: Logging kubelet events for node kind-worker2 May 19 19:17:46.310: INFO: Logging pods the kubelet thinks is on node kind-worker2 May 19 19:17:46.332: INFO: pvc-volume-tester-mf576 started at 2022-05-19 19:17:16 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.332: INFO: Container volume-tester ready: false, restart count 0 May 19 19:17:46.332: INFO: e2e-net-server started at 2022-05-19 19:17:22 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.332: INFO: Container e2e-net-server ready: true, restart count 0 May 19 19:17:46.332: INFO: rs-jmlhb started at 2022-05-19 19:15:20 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.332: INFO: Container donothing ready: false, restart count 0 May 19 19:17:46.332: INFO: deployment-shared-map-item-removal-55649fd747-zfz8t started at 2022-05-19 19:17:25 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.332: INFO: Container nginx ready: false, restart count 0 May 19 19:17:46.332: INFO: test-container-pod started at <nil> (0+0 container statuses recorded) May 19 19:17:46.332: INFO: local-injector started at 2022-05-19 19:17:26 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.332: INFO: Container local-injector ready: true, restart count 0 May 19 19:17:46.332: INFO: up-down-3-c564n started at 2022-05-19 19:16:56 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.332: INFO: 
Container up-down-3 ready: true, restart count 0 May 19 19:17:46.332: INFO: pod-b119a076-b487-49d6-a12d-03fe0abd6738 started at 2022-05-19 19:17:13 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.332: INFO: Container write-pod ready: true, restart count 0 May 19 19:17:46.332: INFO: verify-service-up-host-exec-pod started at 2022-05-19 19:17:14 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.332: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:17:46.332: INFO: startup-c58995bc-fecd-4222-96d3-fea8de280e19 started at 2022-05-19 19:17:26 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.332: INFO: Container busybox ready: false, restart count 0 May 19 19:17:46.332: INFO: pod-5e698d90-90a0-42ce-b608-000806ca5dc2 started at 2022-05-19 19:17:29 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.332: INFO: Container write-pod ready: true, restart count 0 May 19 19:17:46.332: INFO: pod-0 started at 2022-05-19 19:17:35 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.332: INFO: Container donothing ready: true, restart count 0 May 19 19:17:46.332: INFO: pod-0 started at <nil> (0+0 container statuses recorded) May 19 19:17:46.332: INFO: execpod99569 started at 2022-05-19 19:17:13 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.332: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:17:46.332: INFO: hostexec-kind-worker2-lmqnw started at 2022-05-19 19:17:32 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.332: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:17:46.332: INFO: netserver-1 started at 2022-05-19 19:17:15 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.332: INFO: Container webserver ready: false, restart count 0 May 19 19:17:46.332: INFO: deployment-shared-map-item-removal-55649fd747-6792d started at 2022-05-19 19:17:25 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.332: INFO: Container nginx ready: false, restart count 0 May 19 19:17:46.332: INFO: busybox-91f9a5ba-0aeb-4445-a18c-ab8e7ec56a1a started at 2022-05-19 19:15:23 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.332: INFO: Container busybox ready: true, restart count 0 May 19 19:17:46.332: INFO: kindnet-jk9nv started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.332: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:17:46.332: INFO: hostexec-kind-worker2-28nz4 started at 2022-05-19 19:16:57 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.332: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:17:46.332: INFO: csi-mockplugin-0 started at 2022-05-19 19:16:59 +0000 UTC (0+3 container statuses recorded) May 19 19:17:46.332: INFO: Container csi-provisioner ready: true, restart count 0 May 19 19:17:46.332: INFO: Container driver-registrar ready: true, restart count 0 May 19 19:17:46.332: INFO: Container mock ready: true, restart count 0 May 19 19:17:46.332: INFO: webserver-deployment-847dcfb7fb-rsllf started at <nil> (0+0 container statuses recorded) May 19 19:17:46.332: INFO: webserver-deployment-847dcfb7fb-8zxhr started at <nil> (0+0 container statuses recorded) May 19 19:17:46.332: INFO: webserver-deployment-847dcfb7fb-gr66m started at <nil> (0+0 container statuses recorded) May 19 19:17:46.332: INFO: netserver-1 started at 2022-05-19 19:16:57 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.332: INFO: Container webserver ready: true, restart count 0 May 19 
19:17:46.332: INFO: deployment-585449566-wvn6x started at 2022-05-19 19:17:12 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.332: INFO: Container nginx ready: false, restart count 0 May 19 19:17:46.332: INFO: to-be-attached-pod started at 2022-05-19 19:17:17 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.332: INFO: Container container1 ready: true, restart count 0 May 19 19:17:46.332: INFO: pod-prestop-hook-d6d0551c-4cf0-4204-a542-aec0d4af226c started at 2022-05-19 19:16:56 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.332: INFO: Container nginx ready: true, restart count 0 May 19 19:17:46.332: INFO: kube-proxy-wgjrm started at 2022-05-19 18:59:58 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.332: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:17:46.332: INFO: pod-2 started at <nil> (0+0 container statuses recorded) May 19 19:17:46.332: INFO: webserver-deployment-847dcfb7fb-4tqhj started at <nil> (0+0 container statuses recorded) May 19 19:17:46.333: INFO: up-down-3-5xznd started at 2022-05-19 19:16:56 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.333: INFO: Container up-down-3 ready: true, restart count 0 May 19 19:17:46.333: INFO: pod-handle-http-request started at 2022-05-19 19:17:12 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.333: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:17:46.333: INFO: externalsvc-4xd9s started at 2022-05-19 19:16:57 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.333: INFO: Container externalsvc ready: true, restart count 0 May 19 19:17:46.333: INFO: ss-0 started at <nil> (0+0 container statuses recorded) May 19 19:17:46.333: INFO: pod-1 started at <nil> (0+0 container statuses recorded) May 19 19:17:46.333: INFO: up-down-2-5zs72 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.333: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:17:46.333: INFO: verify-service-up-exec-pod-x9msv started at 2022-05-19 19:17:36 +0000 UTC (0+1 container statuses recorded) May 19 19:17:46.333: INFO: Container agnhost-container ready: false, restart count 0 May 19 19:17:46.333: INFO: webserver-deployment-847dcfb7fb-kp7z4 started at <nil> (0+0 container statuses recorded) May 19 19:17:46.657: INFO: Latency metrics for node kind-worker2 May 19 19:17:46.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "statefulset-6908" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sConntrack\sshould\sdrop\sINVALID\sconntrack\sentries$'
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:361 May 19 19:16:19.406: Unexpected error: <*errors.StatusError | 0xc001752d20>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:103from junit_04.xml
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data","total":-1,"completed":25,"skipped":227,"failed":0} [BeforeEach] [sig-network] Conntrack /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client May 19 19:15:50.323: INFO: >>> kubeConfig: /root/.kube/kind-test-config �[1mSTEP�[0m: Building a namespace api object, basename conntrack �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Conntrack /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:96 [It] should drop INVALID conntrack entries /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:361 May 19 19:15:50.421: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true) May 19 19:15:52.430: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true) May 19 19:15:54.425: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true) May 19 19:15:56.430: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true) May 19 19:15:58.425: INFO: The status of Pod boom-server is Pending, waiting for it to be Running (with Ready = true) May 19 19:16:00.426: INFO: The status of Pod boom-server is Running (Ready = true) �[1mSTEP�[0m: Server pod created on node kind-worker2 �[1mSTEP�[0m: Server service created May 19 19:16:00.484: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true) May 19 19:16:02.487: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true) May 19 19:16:04.495: INFO: The status of Pod startup-script is Pending, waiting for it to be Running (with Ready = true) May 19 19:16:19.406: FAIL: Unexpected error: <*errors.StatusError | 0xc001752d20>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*PodClient).CreateSync(0xc002e21d58, 0xc00008dc00, 0xb) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:103 +0xfe k8s.io/kubernetes/test/e2e/network.glob..func1.6() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/conntrack.go:449 +0xaf5 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000327680) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000327680) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc000327680, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 [AfterEach] [sig-network] Conntrack /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 �[1mSTEP�[0m: Collecting events from namespace "conntrack-7616". �[1mSTEP�[0m: Found 9 events. 
May 19 19:16:24.030: INFO: At 2022-05-19 19:15:50 +0000 UTC - event for boom-server: {default-scheduler } Scheduled: Successfully assigned conntrack-7616/boom-server to kind-worker2 May 19 19:16:24.030: INFO: At 2022-05-19 19:15:51 +0000 UTC - event for boom-server: {kubelet kind-worker2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2" May 19 19:16:24.030: INFO: At 2022-05-19 19:15:54 +0000 UTC - event for boom-server: {kubelet kind-worker2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2" in 2.579785143s May 19 19:16:24.030: INFO: At 2022-05-19 19:15:54 +0000 UTC - event for boom-server: {kubelet kind-worker2} Created: Created container boom-server May 19 19:16:24.030: INFO: At 2022-05-19 19:15:54 +0000 UTC - event for boom-server: {kubelet kind-worker2} Started: Started container boom-server May 19 19:16:24.030: INFO: At 2022-05-19 19:16:00 +0000 UTC - event for startup-script: {default-scheduler } Scheduled: Successfully assigned conntrack-7616/startup-script to kind-worker May 19 19:16:24.030: INFO: At 2022-05-19 19:16:01 +0000 UTC - event for startup-script: {kubelet kind-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine May 19 19:16:24.030: INFO: At 2022-05-19 19:16:01 +0000 UTC - event for startup-script: {kubelet kind-worker} Created: Created container startup-script May 19 19:16:24.030: INFO: At 2022-05-19 19:16:01 +0000 UTC - event for startup-script: {kubelet kind-worker} Started: Started container startup-script May 19 19:16:24.216: INFO: POD NODE PHASE GRACE CONDITIONS May 19 19:16:24.216: INFO: boom-server kind-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:50 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:50 +0000 UTC }] May 19 19:16:24.216: INFO: startup-script kind-worker Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:16:00 +0000 UTC }] May 19 19:16:24.216: INFO: May 19 19:16:24.318: INFO: Logging node info for node kind-control-plane May 19 19:16:24.415: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7e5be5e1-f40a-4fa4-9c1e-ae1356a8d5d2 42765 0 2022-05-19 18:59:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-19 18:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } 
{kube-controller-manager Update v1 2022-05-19 18:59:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-19 18:59:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6da7820732414c09b805ad6b354ea130,SystemUUID:cd27b40e-6f1b-48bc-8ba8-36264df7de17,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 
k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:16:24.416: INFO: Logging kubelet events for node kind-control-plane May 19 19:16:24.511: INFO: Logging pods the kubelet thinks is on node kind-control-plane May 19 19:16:24.642: INFO: coredns-78fcd69978-79cfm started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.642: INFO: Container coredns ready: true, restart count 0 May 19 19:16:24.642: INFO: kube-proxy-c8wmp started at 2022-05-19 18:59:52 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.642: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:24.642: INFO: kube-scheduler-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.642: INFO: Container kube-scheduler ready: true, restart count 0 May 19 19:16:24.642: INFO: kube-apiserver-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.642: INFO: Container kube-apiserver ready: true, restart count 0 May 19 19:16:24.642: INFO: kube-controller-manager-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.642: INFO: Container kube-controller-manager ready: true, restart count 0 May 19 19:16:24.642: INFO: kindnet-sp68s started at 2022-05-19 18:59:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.642: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:24.642: INFO: coredns-78fcd69978-szhjl started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.642: INFO: Container coredns ready: true, restart count 0 May 19 19:16:24.642: INFO: local-path-provisioner-6c9449b9dd-rq246 started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.642: INFO: Container local-path-provisioner ready: true, restart count 0 May 19 19:16:24.642: INFO: etcd-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.642: INFO: Container etcd ready: true, restart count 0 May 19 19:16:25.097: INFO: Latency metrics for node kind-control-plane May 19 19:16:25.097: INFO: Logging node info for node kind-worker May 19 19:16:25.171: INFO: Node Info: &Node{ObjectMeta:{kind-worker 5aace22e-9461-4dd4-8842-d4c95088e6c2 46103 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker 
kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-7246":"csi-mock-csi-mock-volumes-7246"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-19 19:15:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:23febdd7255b49db9d80d997950dd2f2,SystemUUID:09159bf9-dc54-4c7a-91f7-a2bdb5d0f9d7,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 
k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:16:25.171: INFO: Logging kubelet events for node kind-worker May 19 19:16:25.216: INFO: Logging pods the kubelet thinks is on node kind-worker May 19 19:16:25.262: INFO: csi-mockplugin-0 started at 2022-05-19 19:14:14 +0000 UTC (0+3 container statuses recorded) May 19 19:16:25.262: INFO: Container csi-provisioner ready: true, restart count 0 
May 19 19:16:25.262: INFO: Container driver-registrar ready: true, restart count 0 May 19 19:16:25.262: INFO: Container mock ready: true, restart count 0 May 19 19:16:25.262: INFO: up-down-2-prvvv started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.262: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:25.262: INFO: netserver-0 started at 2022-05-19 19:14:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.262: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.262: INFO: pod-subpath-test-projected-p9hs started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.262: INFO: Container test-container-subpath-projected-p9hs ready: true, restart count 0 May 19 19:16:25.262: INFO: ss2-0 started at 2022-05-19 19:15:40 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.262: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.262: INFO: hostexec-kind-worker-9n75g started at 2022-05-19 19:16:01 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.262: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:25.262: INFO: up-down-1-wbz22 started at 2022-05-19 19:14:24 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.262: INFO: Container up-down-1 ready: true, restart count 0 May 19 19:16:25.262: INFO: startup-script started at 2022-05-19 19:16:00 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.262: INFO: Container startup-script ready: true, restart count 0 May 19 19:16:25.262: INFO: ss2-2 started at <nil> (0+0 container statuses recorded) May 19 19:16:25.263: INFO: test-new-deployment-847dcfb7fb-c4njf started at 2022-05-19 19:15:58 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.263: INFO: Container httpd ready: false, restart count 0 May 19 19:16:25.263: INFO: csi-mockplugin-resizer-0 started at 2022-05-19 19:14:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.263: INFO: Container csi-resizer ready: true, restart count 0 May 19 19:16:25.263: INFO: pod-secrets-7eedc752-ee2a-4887-bfa0-6b08ae1b03ad started at 2022-05-19 19:15:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.263: INFO: Container secret-volume-test ready: false, restart count 0 May 19 19:16:25.263: INFO: test-container-pod started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.263: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.263: INFO: host-test-container-pod started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.263: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:25.263: INFO: kindnet-4gdb4 started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.263: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:25.263: INFO: csi-mockplugin-attacher-0 started at 2022-05-19 19:14:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.263: INFO: Container csi-attacher ready: true, restart count 0 May 19 19:16:25.263: INFO: test-container-pod started at 2022-05-19 19:15:43 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.263: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.263: INFO: up-down-2-gkj59 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.263: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:25.263: INFO: 
hostexec-kind-worker-f96wj started at 2022-05-19 19:15:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.263: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:25.263: INFO: kube-proxy-cv6pt started at 2022-05-19 18:59:55 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.263: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:25.263: INFO: pod-secrets-a23f9038-0c7a-436f-a040-8f745ef7d572 started at 2022-05-19 19:15:35 +0000 UTC (0+3 container statuses recorded) May 19 19:16:25.263: INFO: Container creates-volume-test ready: true, restart count 0 May 19 19:16:25.263: INFO: Container dels-volume-test ready: true, restart count 0 May 19 19:16:25.263: INFO: Container upds-volume-test ready: true, restart count 0 May 19 19:16:25.263: INFO: configmap-client started at <nil> (0+0 container statuses recorded) May 19 19:16:25.263: INFO: netserver-0 started at 2022-05-19 19:14:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.263: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.263: INFO: oidc-discovery-validator started at 2022-05-19 19:15:50 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.263: INFO: Container oidc-discovery-validator ready: false, restart count 0 May 19 19:16:25.263: INFO: hostexec-kind-worker-dcvzn started at <nil> (0+0 container statuses recorded) May 19 19:16:25.263: INFO: netserver-0 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.263: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.177: INFO: Latency metrics for node kind-worker May 19 19:16:26.177: INFO: Logging node info for node kind-worker2 May 19 19:16:26.262: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 7a16523d-7da3-4c78-89f3-8eb0caae50f1 46424 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-1152":"kind-worker2"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2022-05-19 19:15:42 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-05-19 19:16:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9e448859533f489fa0673b5d908c4c8a,SystemUUID:73bc70df-3537-48c2-a00e-91739ab5b72a,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 
k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce 
k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1152^11ea9557-d7a8-11ec-b978-a6428bea170d,DevicePath:,},},Config:nil,},} May 19 19:16:26.263: INFO: Logging kubelet events for node kind-worker2 May 19 19:16:26.320: INFO: Logging pods the kubelet thinks is on node kind-worker2 May 19 19:16:26.416: INFO: netserver-1 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.416: INFO: Container webserver ready: false, restart count 0 May 19 19:16:26.416: INFO: busybox-91f9a5ba-0aeb-4445-a18c-ab8e7ec56a1a started at 2022-05-19 19:15:23 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.417: INFO: Container busybox ready: true, restart count 0 May 19 19:16:26.417: INFO: kindnet-jk9nv started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.417: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:26.417: INFO: pod-22513bfc-35e4-450f-9936-d72b17f75f4c started at 2022-05-19 19:15:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.417: INFO: Container write-pod ready: true, restart count 0 May 19 19:16:26.417: INFO: netserver-1 started at 2022-05-19 19:14:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.417: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.417: INFO: kube-proxy-wgjrm started at 2022-05-19 18:59:58 +0000 UTC (0+1 container statuses recorded) May 19 
19:16:26.417: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:26.417: INFO: hostexec-kind-worker2-86jcw started at 2022-05-19 19:15:26 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.417: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:26.417: INFO: ss2-1 started at 2022-05-19 19:15:51 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.417: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.417: INFO: pod-secrets-bab7a309-2f16-455c-99ff-6d1894bc83b0 started at <nil> (0+0 container statuses recorded) May 19 19:16:26.417: INFO: up-down-2-5zs72 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.417: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:26.417: INFO: ss-0 started at 2022-05-19 19:15:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.417: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.417: INFO: csi-hostpathplugin-0 started at 2022-05-19 19:15:33 +0000 UTC (0+7 container statuses recorded) May 19 19:16:26.417: INFO: Container csi-attacher ready: true, restart count 0 May 19 19:16:26.417: INFO: Container csi-provisioner ready: true, restart count 0 May 19 19:16:26.417: INFO: Container csi-resizer ready: true, restart count 0 May 19 19:16:26.417: INFO: Container csi-snapshotter ready: true, restart count 0 May 19 19:16:26.417: INFO: Container hostpath ready: true, restart count 0 May 19 19:16:26.417: INFO: Container liveness-probe ready: true, restart count 0 May 19 19:16:26.417: INFO: Container node-driver-registrar ready: true, restart count 0 May 19 19:16:26.417: INFO: liveness-188ed752-b6d5-4d2d-8753-f3495434988f started at 2022-05-19 19:14:22 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.417: INFO: Container agnhost-container ready: false, restart count 4 May 19 19:16:26.417: INFO: downwardapi-volume-cdb36836-a0b8-4038-ba01-38306a582019 started at 2022-05-19 19:16:03 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.417: INFO: Container client-container ready: false, restart count 0 May 19 19:16:26.417: INFO: netserver-1 started at 2022-05-19 19:14:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.417: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.417: INFO: rs-jmlhb started at 2022-05-19 19:15:20 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.417: INFO: Container donothing ready: false, restart count 0 May 19 19:16:26.417: INFO: boom-server started at 2022-05-19 19:15:50 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.417: INFO: Container boom-server ready: true, restart count 0 May 19 19:16:26.417: INFO: hostexec-kind-worker2-vfx8t started at 2022-05-19 19:15:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.417: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:26.417: INFO: forbid-27549796-5gbns started at 2022-05-19 19:16:00 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.417: INFO: Container c ready: true, restart count 0 May 19 19:16:27.092: INFO: Latency metrics for node kind-worker2 May 19 19:16:27.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "conntrack-7616" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sServices\sshould\sfunction\sfor\sclient\sIP\sbased\ssession\saffinity\:\sudp\s\[LinuxOnly\]$'
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:426 May 19 19:16:19.403: Unexpected error: <*errors.StatusError | 0xc0039f10e0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:858 from junit_15.xml
[BeforeEach] [sig-network] Networking /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client May 19 19:15:46.873: INFO: >>> kubeConfig: /root/.kube/kind-test-config �[1mSTEP�[0m: Building a namespace api object, basename nettest �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should function for client IP based session affinity: udp [LinuxOnly] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:426 �[1mSTEP�[0m: Performing setup for networking test in namespace nettest-3676 �[1mSTEP�[0m: creating a selector �[1mSTEP�[0m: Creating the service pods in kubernetes May 19 19:15:46.952: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 19 19:15:47.035: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 19 19:15:49.040: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 19 19:15:51.039: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 19 19:15:53.050: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 19 19:15:55.060: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 19 19:15:57.039: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 19 19:15:59.040: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 19:16:01.065: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 19:16:03.072: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 19:16:05.062: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 19:16:19.403: FAIL: Unexpected error: <*errors.StatusError | 0xc0039f10e0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc002bed180, 0x707e124, 0x9, 0xc0035eb440, 0x0, 0xc00030c000, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:858 +0x4cd k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc002bed180, 0xc0035eb440) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:760 +0x7b k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc002bed180, 0xc0035eb440) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:775 +0x50 k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc0013ba420, 0x0, 0x0, 0x0, 0xb9) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:129 +0x165 k8s.io/kubernetes/test/e2e/network.glob..func20.6.17() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:427 +0x4d k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000326d80) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c 
k8s.io/kubernetes/test/e2e.TestE2E(0xc000326d80) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc000326d80, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 [AfterEach] [sig-network] Networking /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 �[1mSTEP�[0m: Collecting events from namespace "nettest-3676". �[1mSTEP�[0m: Found 8 events. May 19 19:16:24.054: INFO: At 2022-05-19 19:15:47 +0000 UTC - event for netserver-0: {default-scheduler } Scheduled: Successfully assigned nettest-3676/netserver-0 to kind-worker May 19 19:16:24.054: INFO: At 2022-05-19 19:15:47 +0000 UTC - event for netserver-1: {default-scheduler } Scheduled: Successfully assigned nettest-3676/netserver-1 to kind-worker2 May 19 19:16:24.054: INFO: At 2022-05-19 19:15:48 +0000 UTC - event for netserver-0: {kubelet kind-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine May 19 19:16:24.054: INFO: At 2022-05-19 19:15:48 +0000 UTC - event for netserver-0: {kubelet kind-worker} Created: Created container webserver May 19 19:16:24.054: INFO: At 2022-05-19 19:15:48 +0000 UTC - event for netserver-0: {kubelet kind-worker} Started: Started container webserver May 19 19:16:24.054: INFO: At 2022-05-19 19:15:48 +0000 UTC - event for netserver-1: {kubelet kind-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine May 19 19:16:24.054: INFO: At 2022-05-19 19:15:48 +0000 UTC - event for netserver-1: {kubelet kind-worker2} Created: Created container webserver May 19 19:16:24.054: INFO: At 2022-05-19 19:15:48 +0000 UTC - event for netserver-1: {kubelet kind-worker2} Started: Started container webserver May 19 19:16:24.217: INFO: POD NODE PHASE GRACE CONDITIONS May 19 19:16:24.217: INFO: netserver-0 kind-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:46 +0000 UTC }] May 19 19:16:24.217: INFO: netserver-1 kind-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:47 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:47 +0000 UTC }] May 19 19:16:24.217: INFO: May 19 19:16:24.317: INFO: Logging node info for node kind-control-plane May 19 19:16:24.412: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7e5be5e1-f40a-4fa4-9c1e-ae1356a8d5d2 42765 0 2022-05-19 18:59:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] 
map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-19 18:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-19 18:59:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-19 18:59:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6da7820732414c09b805ad6b354ea130,SystemUUID:cd27b40e-6f1b-48bc-8ba8-36264df7de17,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:16:24.412: INFO: Logging kubelet events for node kind-control-plane May 19 19:16:24.520: INFO: Logging pods the kubelet thinks is on node kind-control-plane May 19 19:16:24.610: INFO: kube-proxy-c8wmp started at 2022-05-19 18:59:52 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.610: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:24.610: INFO: kube-scheduler-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.610: INFO: Container kube-scheduler ready: true, restart count 0 May 19 19:16:24.610: INFO: coredns-78fcd69978-79cfm started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.610: INFO: Container coredns ready: true, restart count 0 May 19 19:16:24.610: INFO: kube-controller-manager-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.610: INFO: Container kube-controller-manager ready: true, restart count 0 May 19 19:16:24.610: INFO: kindnet-sp68s started at 2022-05-19 18:59:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.610: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:24.610: INFO: coredns-78fcd69978-szhjl started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.610: INFO: Container coredns ready: true, restart count 0 May 19 19:16:24.610: INFO: local-path-provisioner-6c9449b9dd-rq246 started at 2022-05-19 
18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.610: INFO: Container local-path-provisioner ready: true, restart count 0 May 19 19:16:24.610: INFO: etcd-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.610: INFO: Container etcd ready: true, restart count 0 May 19 19:16:24.610: INFO: kube-apiserver-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.610: INFO: Container kube-apiserver ready: true, restart count 0 May 19 19:16:25.098: INFO: Latency metrics for node kind-control-plane May 19 19:16:25.098: INFO: Logging node info for node kind-worker May 19 19:16:25.151: INFO: Node Info: &Node{ObjectMeta:{kind-worker 5aace22e-9461-4dd4-8842-d4c95088e6c2 46103 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-7246":"csi-mock-csi-mock-volumes-7246"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-19 19:15:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 
UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:23febdd7255b49db9d80d997950dd2f2,SystemUUID:09159bf9-dc54-4c7a-91f7-a2bdb5d0f9d7,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac 
k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:16:25.151: INFO: Logging kubelet events for node kind-worker May 19 19:16:25.175: INFO: Logging pods the kubelet thinks is on node kind-worker May 19 19:16:25.231: INFO: netserver-0 started at 2022-05-19 19:14:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.231: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.231: INFO: pod-subpath-test-projected-p9hs started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.231: INFO: Container test-container-subpath-projected-p9hs ready: true, restart count 0 May 19 19:16:25.231: INFO: ss2-0 started at 2022-05-19 19:15:40 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.231: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.231: INFO: hostexec-kind-worker-9n75g started at 2022-05-19 19:16:01 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.231: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:25.231: INFO: up-down-1-wbz22 started at 2022-05-19 19:14:24 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.231: INFO: Container up-down-1 ready: true, restart count 0 May 19 19:16:25.231: INFO: up-down-2-prvvv started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.231: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:25.231: INFO: ss2-2 started at <nil> (0+0 container statuses recorded) May 19 19:16:25.231: INFO: test-new-deployment-847dcfb7fb-c4njf started at 2022-05-19 19:15:58 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.231: INFO: Container httpd ready: false, restart count 0 May 19 19:16:25.231: INFO: startup-script started at 2022-05-19 19:16:00 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.231: INFO: Container startup-script ready: true, restart count 0 May 19 19:16:25.231: INFO: pod-secrets-7eedc752-ee2a-4887-bfa0-6b08ae1b03ad started at 2022-05-19 19:15:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.231: INFO: Container secret-volume-test ready: false, restart count 0 May 19 19:16:25.231: INFO: csi-mockplugin-resizer-0 started at 2022-05-19 19:14:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.231: INFO: Container csi-resizer ready: true, restart count 0 May 19 19:16:25.231: INFO: host-test-container-pod started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.231: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:25.231: INFO: kindnet-4gdb4 started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.231: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:25.231: INFO: csi-mockplugin-attacher-0 started at 2022-05-19 19:14:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.231: INFO: Container csi-attacher ready: true, restart count 0 May 19 
19:16:25.231: INFO: test-container-pod started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.231: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.231: INFO: up-down-2-gkj59 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.232: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:25.232: INFO: hostexec-kind-worker-f96wj started at 2022-05-19 19:15:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.232: INFO: Container agnhost-container ready: false, restart count 0 May 19 19:16:25.232: INFO: kube-proxy-cv6pt started at 2022-05-19 18:59:55 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.232: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:25.232: INFO: pod-secrets-a23f9038-0c7a-436f-a040-8f745ef7d572 started at 2022-05-19 19:15:35 +0000 UTC (0+3 container statuses recorded) May 19 19:16:25.232: INFO: Container creates-volume-test ready: true, restart count 0 May 19 19:16:25.232: INFO: Container dels-volume-test ready: true, restart count 0 May 19 19:16:25.232: INFO: Container upds-volume-test ready: true, restart count 0 May 19 19:16:25.232: INFO: test-container-pod started at 2022-05-19 19:15:43 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.232: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.232: INFO: netserver-0 started at 2022-05-19 19:14:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.232: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.232: INFO: oidc-discovery-validator started at 2022-05-19 19:15:50 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.232: INFO: Container oidc-discovery-validator ready: false, restart count 0 May 19 19:16:25.232: INFO: hostexec-kind-worker-dcvzn started at <nil> (0+0 container statuses recorded) May 19 19:16:25.232: INFO: netserver-0 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.232: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.232: INFO: configmap-client started at <nil> (0+0 container statuses recorded) May 19 19:16:25.232: INFO: csi-mockplugin-0 started at 2022-05-19 19:14:14 +0000 UTC (0+3 container statuses recorded) May 19 19:16:25.232: INFO: Container csi-provisioner ready: true, restart count 0 May 19 19:16:25.232: INFO: Container driver-registrar ready: true, restart count 0 May 19 19:16:25.232: INFO: Container mock ready: true, restart count 0 May 19 19:16:26.482: INFO: Latency metrics for node kind-worker May 19 19:16:26.482: INFO: Logging node info for node kind-worker2 May 19 19:16:26.511: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 7a16523d-7da3-4c78-89f3-8eb0caae50f1 46424 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-1152":"kind-worker2"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } 
{kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2022-05-19 19:15:42 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-05-19 19:16:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9e448859533f489fa0673b5d908c4c8a,SystemUUID:73bc70df-3537-48c2-a00e-91739ab5b72a,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 
21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd 
k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1152^11ea9557-d7a8-11ec-b978-a6428bea170d,DevicePath:,},},Config:nil,},} May 19 19:16:26.512: INFO: Logging kubelet events for node kind-worker2 May 19 19:16:26.593: INFO: Logging pods the kubelet thinks is on node kind-worker2 May 19 19:16:26.619: INFO: netserver-1 started at 2022-05-19 19:14:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.619: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.619: INFO: csi-hostpathplugin-0 started at 2022-05-19 19:15:33 +0000 UTC (0+7 container statuses recorded) May 19 19:16:26.619: INFO: Container csi-attacher ready: true, restart count 0 May 19 19:16:26.619: INFO: Container 
csi-provisioner ready: true, restart count 0 May 19 19:16:26.619: INFO: Container csi-resizer ready: true, restart count 0 May 19 19:16:26.619: INFO: Container csi-snapshotter ready: true, restart count 0 May 19 19:16:26.619: INFO: Container hostpath ready: true, restart count 0 May 19 19:16:26.619: INFO: Container liveness-probe ready: true, restart count 0 May 19 19:16:26.619: INFO: Container node-driver-registrar ready: true, restart count 0 May 19 19:16:26.619: INFO: liveness-188ed752-b6d5-4d2d-8753-f3495434988f started at 2022-05-19 19:14:22 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.619: INFO: Container agnhost-container ready: false, restart count 4 May 19 19:16:26.619: INFO: downwardapi-volume-cdb36836-a0b8-4038-ba01-38306a582019 started at 2022-05-19 19:16:03 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.620: INFO: Container client-container ready: false, restart count 0 May 19 19:16:26.620: INFO: hostexec-kind-worker2-vfx8t started at 2022-05-19 19:15:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.620: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:26.620: INFO: rs-jmlhb started at 2022-05-19 19:15:20 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.620: INFO: Container donothing ready: false, restart count 0 May 19 19:16:26.620: INFO: boom-server started at 2022-05-19 19:15:50 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.620: INFO: Container boom-server ready: true, restart count 0 May 19 19:16:26.620: INFO: forbid-27549796-5gbns started at 2022-05-19 19:16:00 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.620: INFO: Container c ready: true, restart count 0 May 19 19:16:26.620: INFO: netserver-1 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.620: INFO: Container webserver ready: false, restart count 0 May 19 19:16:26.620: INFO: busybox-91f9a5ba-0aeb-4445-a18c-ab8e7ec56a1a started at 2022-05-19 19:15:23 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.620: INFO: Container busybox ready: true, restart count 0 May 19 19:16:26.620: INFO: kindnet-jk9nv started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.620: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:26.620: INFO: pod-22513bfc-35e4-450f-9936-d72b17f75f4c started at 2022-05-19 19:15:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.620: INFO: Container write-pod ready: true, restart count 0 May 19 19:16:26.620: INFO: ss2-1 started at 2022-05-19 19:15:51 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.620: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.620: INFO: netserver-1 started at 2022-05-19 19:14:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.620: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.620: INFO: kube-proxy-wgjrm started at 2022-05-19 18:59:58 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.620: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:26.620: INFO: hostexec-kind-worker2-86jcw started at 2022-05-19 19:15:26 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.620: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:26.620: INFO: ss-0 started at 2022-05-19 19:15:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.620: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.620: INFO: 
pod-secrets-bab7a309-2f16-455c-99ff-6d1894bc83b0 started at <nil> (0+0 container statuses recorded) May 19 19:16:26.620: INFO: up-down-2-5zs72 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.620: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:27.466: INFO: Latency metrics for node kind-worker2 May 19 19:16:27.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "nettest-3676" for this suite.
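Note (not part of the failure log): the repeated "The status of Pod netserver-0 is Pending/Running (Ready = false)" lines above come from the e2e framework polling pod readiness via the apiserver, and each of these failures aborts as soon as a single Get call returns the HTTP 500 "etcdserver: request timed out" error. The following minimal Go sketch is only an illustration of that pattern, not the framework's actual code; the helper name waitForPodReady, the 2s/10m polling intervals, and the main() wiring are assumptions, while the kubeconfig path, namespace, and pod name are taken from the log above.

// Hypothetical sketch of a readiness poll like the one whose output appears above.
// Assumed names: waitForPodReady, the 2s interval, and the 10m timeout.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReady polls until the pod reports Ready=true or the timeout expires.
// Any error from the Get call (for example the HTTP 500 "etcdserver: request timed out"
// seen in these failures) is returned immediately and ends the wait.
func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 10*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err // transient apiserver/etcd errors are not retried here
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		fmt.Printf("The status of Pod %s is %s, waiting for it to be Running (with Ready = true)\n",
			name, pod.Status.Phase)
		return false, nil
	})
}

func main() {
	// Kubeconfig path as printed in the log; namespace and pod name from the failed test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/kind-test-config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodReady(context.Background(), cs, "nettest-3676", "netserver-0"); err != nil {
		panic(err)
	}
}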
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sServices\sshould\sfunction\sfor\smultiple\sendpoint\-Services\swith\ssame\sselector$'
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:281 May 19 19:16:12.378: failed to get pod test-container-pod Unexpected error: <*errors.StatusError | 0xc002daadc0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/exec_util.go:124 from junit_08.xml
[BeforeEach] [sig-network] Networking /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client May 19 19:14:52.933: INFO: >>> kubeConfig: /root/.kube/kind-test-config �[1mSTEP�[0m: Building a namespace api object, basename nettest �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should function for multiple endpoint-Services with same selector /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:281 �[1mSTEP�[0m: Performing setup for networking test in namespace nettest-533 �[1mSTEP�[0m: creating a selector �[1mSTEP�[0m: Creating the service pods in kubernetes May 19 19:14:52.972: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 19 19:14:53.037: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 19 19:14:55.051: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 19 19:14:57.058: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 19 19:14:59.040: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 19 19:15:01.051: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 19 19:15:03.063: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 19:15:05.048: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 19:15:07.041: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 19:15:09.056: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 19:15:11.060: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 19:15:13.040: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 19:15:15.065: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 19:15:17.044: INFO: The status of Pod netserver-0 is Running (Ready = false) May 19 19:15:19.040: INFO: The status of Pod netserver-0 is Running (Ready = true) May 19 19:15:19.045: INFO: The status of Pod netserver-1 is Running (Ready = false) May 19 19:15:21.064: INFO: The status of Pod netserver-1 is Running (Ready = false) May 19 19:15:23.069: INFO: The status of Pod netserver-1 is Running (Ready = false) May 19 19:15:25.070: INFO: The status of Pod netserver-1 is Running (Ready = false) May 19 19:15:27.056: INFO: The status of Pod netserver-1 is Running (Ready = false) May 19 19:15:29.064: INFO: The status of Pod netserver-1 is Running (Ready = false) May 19 19:15:31.054: INFO: The status of Pod netserver-1 is Running (Ready = false) May 19 19:15:33.064: INFO: The status of Pod netserver-1 is Running (Ready = false) May 19 19:15:35.056: INFO: The status of Pod netserver-1 is Running (Ready = false) May 19 19:15:37.075: INFO: The status of Pod netserver-1 is Running (Ready = false) May 19 19:15:39.066: INFO: The status of Pod netserver-1 is Running (Ready = false) May 19 19:15:41.048: INFO: The status of Pod netserver-1 is Running (Ready = false) May 19 19:15:43.058: INFO: The status of Pod netserver-1 is Running (Ready = true) �[1mSTEP�[0m: Creating test pods May 19 19:15:53.120: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 �[1mSTEP�[0m: Getting node addresses May 19 19:15:53.120: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable 
�[1mSTEP�[0m: Creating the service on top of the pods in kubernetes May 19 19:15:53.317: INFO: Service node-port-service in namespace nettest-533 found. May 19 19:15:53.435: INFO: Service session-affinity-service in namespace nettest-533 found. �[1mSTEP�[0m: Waiting for NodePort service to expose endpoint May 19 19:15:54.438: INFO: Waiting for amount of service:node-port-service endpoints to be 2 �[1mSTEP�[0m: Waiting for Session Affinity service to expose endpoint May 19 19:15:55.441: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2 �[1mSTEP�[0m: creating a second service with same selector May 19 19:15:55.529: INFO: Service second-node-port-service in namespace nettest-533 found. May 19 19:15:56.537: INFO: Waiting for amount of service:second-node-port-service endpoints to be 2 �[1mSTEP�[0m: dialing(http) netserver-0 (endpoint) --> 10.96.135.34:80 (config.clusterIP) May 19 19:15:56.549: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.234:8083/dial?request=hostname&protocol=http&host=10.96.135.34&port=80&tries=1'] Namespace:nettest-533 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 19 19:15:56.549: INFO: >>> kubeConfig: /root/.kube/kind-test-config May 19 19:15:56.672: INFO: Waiting for responses: map[netserver-0:{}] May 19 19:15:58.675: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.234:8083/dial?request=hostname&protocol=http&host=10.96.135.34&port=80&tries=1'] Namespace:nettest-533 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 19 19:15:58.675: INFO: >>> kubeConfig: /root/.kube/kind-test-config May 19 19:15:58.760: INFO: Waiting for responses: map[] May 19 19:15:58.760: INFO: reached 10.96.135.34 after 1/34 tries �[1mSTEP�[0m: dialing(http) netserver-0 (endpoint) --> 172.18.0.3:30376 (nodeIP) May 19 19:15:58.763: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.234:8083/dial?request=hostname&protocol=http&host=172.18.0.3&port=30376&tries=1'] Namespace:nettest-533 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 19 19:15:58.763: INFO: >>> kubeConfig: /root/.kube/kind-test-config May 19 19:15:58.865: INFO: Waiting for responses: map[netserver-1:{}] May 19 19:16:00.911: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.234:8083/dial?request=hostname&protocol=http&host=172.18.0.3&port=30376&tries=1'] Namespace:nettest-533 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 19 19:16:00.911: INFO: >>> kubeConfig: /root/.kube/kind-test-config May 19 19:16:01.072: INFO: Waiting for responses: map[netserver-1:{}] May 19 19:16:03.108: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.234:8083/dial?request=hostname&protocol=http&host=172.18.0.3&port=30376&tries=1'] Namespace:nettest-533 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 19 19:16:03.108: INFO: >>> kubeConfig: /root/.kube/kind-test-config May 19 19:16:03.277: INFO: Waiting for responses: map[netserver-1:{}] May 19 19:16:05.279: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.1.234:8083/dial?request=hostname&protocol=http&host=172.18.0.3&port=30376&tries=1'] Namespace:nettest-533 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 19 19:16:05.280: INFO: >>> kubeConfig: /root/.kube/kind-test-config May 19 19:16:05.372: INFO: Waiting for responses: map[] May 19 19:16:05.373: INFO: reached 172.18.0.3 after 3/34 tries �[1mSTEP�[0m: dialing(http) netserver-0 (endpoint) --> 10.96.255.57:80 (svc2.clusterIP) May 19 19:16:12.378: FAIL: failed to get pod test-container-pod Unexpected error: <*errors.StatusError | 0xc002daadc0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).execCommandInPodWithFullOutput(0xc0012d51e0, 0xc0034ee780, 0x12, 0xc002d672f0, 0x3, 0x3, 0x6e, 0xc00494f068, 0x11, 0x203000, ...) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/exec_util.go:124 +0x16e k8s.io/kubernetes/test/e2e/framework.(*Framework).ExecShellInPodWithFullOutput(...) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/exec_util.go:136 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).GetResponseFromContainer(0xc002c7e1c0, 0x7071fd3, 0x4, 0x707ad52, 0x8, 0xc002d70440, 0xc, 0xc002fdb914, 0xc, 0x1f93, ...) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:393 +0x37d k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).DialFromContainer(0xc002c7e1c0, 0x7071fd3, 0x4, 0x707ad52, 0x8, 0xc002d70440, 0xc, 0xc002fdb914, 0xc, 0x1f93, ...) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:308 +0x392 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).DialFromEndpointContainer(...) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:216 k8s.io/kubernetes/test/e2e/network.glob..func20.6.10() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:299 +0x774 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000369080) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000369080) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc000369080, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 [AfterEach] [sig-network] Networking /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 �[1mSTEP�[0m: Collecting events from namespace "nettest-533". �[1mSTEP�[0m: Found 12 events. 
May 19 19:16:24.020: INFO: At 2022-05-19 19:14:53 +0000 UTC - event for netserver-0: {default-scheduler } Scheduled: Successfully assigned nettest-533/netserver-0 to kind-worker May 19 19:16:24.020: INFO: At 2022-05-19 19:14:53 +0000 UTC - event for netserver-1: {default-scheduler } Scheduled: Successfully assigned nettest-533/netserver-1 to kind-worker2 May 19 19:16:24.020: INFO: At 2022-05-19 19:14:54 +0000 UTC - event for netserver-1: {kubelet kind-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine May 19 19:16:24.020: INFO: At 2022-05-19 19:14:55 +0000 UTC - event for netserver-0: {kubelet kind-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine May 19 19:16:24.020: INFO: At 2022-05-19 19:14:55 +0000 UTC - event for netserver-0: {kubelet kind-worker} Created: Created container webserver May 19 19:16:24.020: INFO: At 2022-05-19 19:14:55 +0000 UTC - event for netserver-0: {kubelet kind-worker} Started: Started container webserver May 19 19:16:24.020: INFO: At 2022-05-19 19:14:55 +0000 UTC - event for netserver-1: {kubelet kind-worker2} Created: Created container webserver May 19 19:16:24.021: INFO: At 2022-05-19 19:14:55 +0000 UTC - event for netserver-1: {kubelet kind-worker2} Started: Started container webserver May 19 19:16:24.021: INFO: At 2022-05-19 19:15:43 +0000 UTC - event for test-container-pod: {default-scheduler } Scheduled: Successfully assigned nettest-533/test-container-pod to kind-worker May 19 19:16:24.021: INFO: At 2022-05-19 19:15:43 +0000 UTC - event for test-container-pod: {kubelet kind-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine May 19 19:16:24.021: INFO: At 2022-05-19 19:15:44 +0000 UTC - event for test-container-pod: {kubelet kind-worker} Created: Created container webserver May 19 19:16:24.021: INFO: At 2022-05-19 19:15:44 +0000 UTC - event for test-container-pod: {kubelet kind-worker} Started: Started container webserver May 19 19:16:24.216: INFO: POD NODE PHASE GRACE CONDITIONS May 19 19:16:24.216: INFO: netserver-0 kind-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:14:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:14:52 +0000 UTC }] May 19 19:16:24.216: INFO: netserver-1 kind-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:14:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:14:53 +0000 UTC }] May 19 19:16:24.216: INFO: test-container-pod kind-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:44 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:43 +0000 UTC }] May 19 19:16:24.217: INFO: May 19 19:16:24.318: INFO: Logging node info for node kind-control-plane May 19 19:16:24.412: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7e5be5e1-f40a-4fa4-9c1e-ae1356a8d5d2 42765 0 2022-05-19 18:59:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 
beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-19 18:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-19 18:59:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-19 18:59:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6da7820732414c09b805ad6b354ea130,SystemUUID:cd27b40e-6f1b-48bc-8ba8-36264df7de17,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:16:24.413: INFO: Logging kubelet events for node kind-control-plane May 19 19:16:24.520: INFO: Logging pods the kubelet thinks is on node kind-control-plane May 19 19:16:24.610: INFO: kube-controller-manager-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.610: INFO: Container kube-controller-manager ready: true, restart count 0 May 19 19:16:24.610: INFO: kindnet-sp68s started at 2022-05-19 18:59:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.610: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:24.610: INFO: coredns-78fcd69978-szhjl started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.610: INFO: Container coredns ready: true, restart count 0 May 19 19:16:24.610: INFO: local-path-provisioner-6c9449b9dd-rq246 started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.610: INFO: Container local-path-provisioner ready: true, restart count 0 May 19 19:16:24.610: INFO: etcd-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.610: INFO: Container etcd ready: true, restart count 0 May 19 19:16:24.610: INFO: kube-apiserver-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.610: INFO: Container kube-apiserver ready: true, restart count 0 May 19 19:16:24.610: INFO: kube-proxy-c8wmp started at 
2022-05-19 18:59:52 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.610: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:24.610: INFO: kube-scheduler-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.610: INFO: Container kube-scheduler ready: true, restart count 0 May 19 19:16:24.610: INFO: coredns-78fcd69978-79cfm started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.610: INFO: Container coredns ready: true, restart count 0 May 19 19:16:24.997: INFO: Latency metrics for node kind-control-plane May 19 19:16:24.997: INFO: Logging node info for node kind-worker May 19 19:16:25.018: INFO: Node Info: &Node{ObjectMeta:{kind-worker 5aace22e-9461-4dd4-8842-d4c95088e6c2 46103 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-7246":"csi-mock-csi-mock-volumes-7246"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-19 19:15:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 
UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:23febdd7255b49db9d80d997950dd2f2,SystemUUID:09159bf9-dc54-4c7a-91f7-a2bdb5d0f9d7,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac 
k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:16:25.019: INFO: Logging kubelet events for node kind-worker May 19 19:16:25.081: INFO: Logging pods the kubelet thinks is on node kind-worker May 19 19:16:25.162: INFO: hostexec-kind-worker-9n75g started at 2022-05-19 19:16:01 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.162: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:25.162: INFO: up-down-1-wbz22 started at 2022-05-19 19:14:24 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.162: INFO: Container up-down-1 ready: true, restart count 0 May 19 19:16:25.162: INFO: up-down-2-prvvv started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.162: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:25.162: INFO: netserver-0 started at 2022-05-19 19:14:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.162: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.162: INFO: pod-subpath-test-projected-p9hs started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.162: INFO: Container test-container-subpath-projected-p9hs ready: true, restart count 0 May 19 19:16:25.162: INFO: ss2-0 started at 2022-05-19 19:15:40 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.162: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.162: INFO: test-new-deployment-847dcfb7fb-c4njf started at 2022-05-19 19:15:58 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.162: INFO: Container httpd ready: false, restart count 0 May 19 19:16:25.162: INFO: startup-script started at 2022-05-19 19:16:00 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.162: INFO: Container startup-script ready: true, restart count 0 May 19 19:16:25.162: INFO: ss2-2 started at <nil> (0+0 container statuses recorded) May 19 19:16:25.162: INFO: pod-secrets-7eedc752-ee2a-4887-bfa0-6b08ae1b03ad started at 2022-05-19 19:15:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.162: INFO: Container secret-volume-test ready: false, restart count 0 May 19 19:16:25.163: INFO: csi-mockplugin-resizer-0 started at 2022-05-19 19:14:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.163: INFO: Container csi-resizer ready: true, restart count 0 May 19 19:16:25.163: INFO: kindnet-4gdb4 started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.163: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:25.163: INFO: csi-mockplugin-attacher-0 started at 2022-05-19 19:14:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.163: INFO: Container csi-attacher ready: true, restart count 0 May 19 19:16:25.163: INFO: test-container-pod started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.163: INFO: Container webserver ready: true, restart count 0 May 19 
19:16:25.163: INFO: host-test-container-pod started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.163: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:25.163: INFO: kube-proxy-cv6pt started at 2022-05-19 18:59:55 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.163: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:25.163: INFO: pod-secrets-a23f9038-0c7a-436f-a040-8f745ef7d572 started at 2022-05-19 19:15:35 +0000 UTC (0+3 container statuses recorded) May 19 19:16:25.163: INFO: Container creates-volume-test ready: true, restart count 0 May 19 19:16:25.163: INFO: Container dels-volume-test ready: true, restart count 0 May 19 19:16:25.163: INFO: Container upds-volume-test ready: true, restart count 0 May 19 19:16:25.163: INFO: test-container-pod started at 2022-05-19 19:15:43 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.163: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.163: INFO: up-down-2-gkj59 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.163: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:25.163: INFO: hostexec-kind-worker-f96wj started at 2022-05-19 19:15:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.163: INFO: Container agnhost-container ready: false, restart count 0 May 19 19:16:25.163: INFO: hostexec-kind-worker-dcvzn started at <nil> (0+0 container statuses recorded) May 19 19:16:25.163: INFO: netserver-0 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.163: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.163: INFO: configmap-client started at <nil> (0+0 container statuses recorded) May 19 19:16:25.163: INFO: netserver-0 started at 2022-05-19 19:14:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.163: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.163: INFO: oidc-discovery-validator started at 2022-05-19 19:15:50 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.163: INFO: Container oidc-discovery-validator ready: false, restart count 0 May 19 19:16:25.163: INFO: csi-mockplugin-0 started at 2022-05-19 19:14:14 +0000 UTC (0+3 container statuses recorded) May 19 19:16:25.163: INFO: Container csi-provisioner ready: true, restart count 0 May 19 19:16:25.163: INFO: Container driver-registrar ready: true, restart count 0 May 19 19:16:25.163: INFO: Container mock ready: true, restart count 0 May 19 19:16:25.902: INFO: Latency metrics for node kind-worker May 19 19:16:25.903: INFO: Logging node info for node kind-worker2 May 19 19:16:25.930: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 7a16523d-7da3-4c78-89f3-8eb0caae50f1 46424 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-1152":"kind-worker2"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2022-05-19 19:15:42 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-05-19 19:16:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9e448859533f489fa0673b5d908c4c8a,SystemUUID:73bc70df-3537-48c2-a00e-91739ab5b72a,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 
21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd 
k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1152^11ea9557-d7a8-11ec-b978-a6428bea170d,DevicePath:,},},Config:nil,},} May 19 19:16:25.930: INFO: Logging kubelet events for node kind-worker2 May 19 19:16:25.981: INFO: Logging pods the kubelet thinks is on node kind-worker2 May 19 19:16:26.007: INFO: netserver-1 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.007: INFO: Container webserver ready: false, restart count 0 May 19 19:16:26.007: INFO: busybox-91f9a5ba-0aeb-4445-a18c-ab8e7ec56a1a started at 2022-05-19 19:15:23 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.007: INFO: Container busybox ready: true, restart count 0 May 19 19:16:26.007: 
INFO: kindnet-jk9nv started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.007: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:26.008: INFO: pod-22513bfc-35e4-450f-9936-d72b17f75f4c started at 2022-05-19 19:15:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.008: INFO: Container write-pod ready: true, restart count 0 May 19 19:16:26.008: INFO: netserver-1 started at 2022-05-19 19:14:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.008: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.008: INFO: kube-proxy-wgjrm started at 2022-05-19 18:59:58 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.008: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:26.008: INFO: hostexec-kind-worker2-86jcw started at 2022-05-19 19:15:26 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.008: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:26.008: INFO: ss2-1 started at 2022-05-19 19:15:51 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.008: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.008: INFO: pod-secrets-bab7a309-2f16-455c-99ff-6d1894bc83b0 started at <nil> (0+0 container statuses recorded) May 19 19:16:26.008: INFO: up-down-2-5zs72 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.008: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:26.008: INFO: ss-0 started at 2022-05-19 19:15:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.008: INFO: Container webserver ready: false, restart count 0 May 19 19:16:26.008: INFO: csi-hostpathplugin-0 started at 2022-05-19 19:15:33 +0000 UTC (0+7 container statuses recorded) May 19 19:16:26.008: INFO: Container csi-attacher ready: true, restart count 0 May 19 19:16:26.008: INFO: Container csi-provisioner ready: true, restart count 0 May 19 19:16:26.008: INFO: Container csi-resizer ready: true, restart count 0 May 19 19:16:26.008: INFO: Container csi-snapshotter ready: true, restart count 0 May 19 19:16:26.008: INFO: Container hostpath ready: true, restart count 0 May 19 19:16:26.008: INFO: Container liveness-probe ready: true, restart count 0 May 19 19:16:26.008: INFO: Container node-driver-registrar ready: true, restart count 0 May 19 19:16:26.008: INFO: liveness-188ed752-b6d5-4d2d-8753-f3495434988f started at 2022-05-19 19:14:22 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.008: INFO: Container agnhost-container ready: false, restart count 4 May 19 19:16:26.008: INFO: downwardapi-volume-cdb36836-a0b8-4038-ba01-38306a582019 started at 2022-05-19 19:16:03 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.008: INFO: Container client-container ready: false, restart count 0 May 19 19:16:26.008: INFO: netserver-1 started at 2022-05-19 19:14:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.008: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.008: INFO: rs-jmlhb started at 2022-05-19 19:15:20 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.008: INFO: Container donothing ready: false, restart count 0 May 19 19:16:26.008: INFO: boom-server started at 2022-05-19 19:15:50 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.008: INFO: Container boom-server ready: true, restart count 0 May 19 19:16:26.008: INFO: hostexec-kind-worker2-vfx8t started at 2022-05-19 19:15:14 +0000 UTC (0+1 container statuses recorded) May 19 
19:16:26.008: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:26.008: INFO: forbid-27549796-5gbns started at 2022-05-19 19:16:00 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.008: INFO: Container c ready: true, restart count 0 May 19 19:16:26.799: INFO: Latency metrics for node kind-worker2 May 19 19:16:26.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "nettest-533" for this suite.
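Triage note: the failure above reduces to the framework's pod Get (the call behind ExecShellInPodWithFullOutput at exec_util.go:124) receiving an HTTP 500 from the apiserver whose message is the raw etcd error. A minimal sketch of how that error shape could be recognized and retried with client-go, assuming a plain clientset; the helper names below are hypothetical and are not part of the e2e framework, which fails the test immediately instead of retrying, as the FAIL lines above show.

package triage

import (
	"context"
	"errors"
	"net/http"
	"strings"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// isEtcdTimeout reports whether err is the apiserver 500 seen above: a
// StatusError whose message is the raw "etcdserver: request timed out".
func isEtcdTimeout(err error) bool {
	var statusErr *apierrors.StatusError
	if !errors.As(err, &statusErr) {
		return false
	}
	s := statusErr.ErrStatus
	return s.Code == http.StatusInternalServerError &&
		strings.Contains(s.Message, "etcdserver: request timed out")
}

// getPodWithRetry retries the kind of pod Get that exec_util.go performs
// before an exec, backing off while the etcd timeout persists.
// Hypothetical helper, shown only to illustrate the error handling.
func getPodWithRetry(ctx context.Context, c kubernetes.Interface, ns, name string) (*corev1.Pod, error) {
	var pod *corev1.Pod
	err := retry.OnError(retry.DefaultBackoff, isEtcdTimeout, func() error {
		var getErr error
		pod, getErr = c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		return getErr
	})
	return pod, err
}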
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sServices\sshould\sfunction\sfor\sservice\sendpoints\susing\shostNetwork$'
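The --ginkgo.focus argument above is a regular expression over the fully qualified test name: regexp metacharacters are escaped, each space is matched as \s, and the expression is anchored with a trailing $. A rough Go sketch of that transformation, for illustration only; the report generator also escapes characters such as '-' and ':', which regexp.QuoteMeta leaves alone, but the resulting expressions select the same test.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// focusFor turns a full Ginkgo test name into a --ginkgo.focus regexp that
// matches only that test (illustrative helper, not part of hack/e2e.go).
func focusFor(name string) string {
	escaped := regexp.QuoteMeta(name)
	return strings.ReplaceAll(escaped, " ", `\s`) + "$"
}

func main() {
	fmt.Println(focusFor("Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for service endpoints using hostNetwork"))
}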
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:466 May 19 19:16:19.400: failed to get pod test-container-pod Unexpected error: <*errors.StatusError | 0xc0025581e0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/exec_util.go:124from junit_14.xml
[BeforeEach] [sig-network] Networking
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 19 19:14:57.476: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename nettest
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for service endpoints using hostNetwork
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:466
STEP: Performing setup for networking test in namespace nettest-3456
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 19 19:14:57.639: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 19 19:14:57.772: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 19 19:14:59.787: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 19 19:15:01.776: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 19 19:15:03.782: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 19 19:15:05.777: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 19 19:15:07.817: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 19 19:15:09.776: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 19 19:15:11.795: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 19 19:15:13.777: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 19 19:15:15.777: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 19 19:15:17.776: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 19 19:15:19.779: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 19 19:15:21.779: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 19 19:15:23.788: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 19 19:15:23.811: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true)
May 19 19:15:25.844: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 19 19:15:27.816: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 19 19:15:29.816: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 19 19:15:31.816: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 19 19:15:33.822: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 19 19:15:35.816: INFO: The status of Pod netserver-1 is Running (Ready = false)
May 19 19:15:37.832: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 19 19:15:53.946: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
STEP: Getting node addresses
May 19 19:15:53.946: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating the service on top of the pods in kubernetes
May 19 19:15:54.060: INFO: Service node-port-service in namespace nettest-3456 found.
May 19 19:15:54.232: INFO: Service session-affinity-service in namespace nettest-3456 found.
STEP: Waiting for NodePort service to expose endpoint
May 19 19:15:55.270: INFO: Waiting for amount of service:node-port-service endpoints to be 2
STEP: Waiting for Session Affinity service to expose endpoint
May 19 19:15:56.278: INFO: Waiting for amount of service:session-affinity-service endpoints to be 2
STEP: pod-Service(hostNetwork): http
STEP: dialing(http) test-container-pod --> 10.96.41.144:80 (config.clusterIP)
May 19 19:15:56.302: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.253:9080/dial?request=hostname&protocol=http&host=10.96.41.144&port=80&tries=1'] Namespace:nettest-3456 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 19 19:15:56.302: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 19 19:15:56.425: INFO: Waiting for responses: map[kind-worker:{}]
May 19 19:15:58.428: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.253:9080/dial?request=hostname&protocol=http&host=10.96.41.144&port=80&tries=1'] Namespace:nettest-3456 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 19 19:15:58.428: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 19 19:15:58.513: INFO: Waiting for responses: map[]
May 19 19:15:58.513: INFO: reached 10.96.41.144 after 1/34 tries
STEP: dialing(http) test-container-pod --> 172.18.0.3:31417 (nodeIP)
May 19 19:15:58.517: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.253:9080/dial?request=hostname&protocol=http&host=172.18.0.3&port=31417&tries=1'] Namespace:nettest-3456 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 19 19:15:58.517: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 19 19:15:58.602: INFO: Waiting for responses: map[kind-worker2:{}]
May 19 19:16:00.606: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.253:9080/dial?request=hostname&protocol=http&host=172.18.0.3&port=31417&tries=1'] Namespace:nettest-3456 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 19 19:16:00.606: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 19 19:16:00.782: INFO: Waiting for responses: map[kind-worker2:{}]
May 19 19:16:02.825: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.253:9080/dial?request=hostname&protocol=http&host=172.18.0.3&port=31417&tries=1'] Namespace:nettest-3456 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 19 19:16:02.825: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 19 19:16:03.004: INFO: Waiting for responses: map[]
May 19 19:16:03.004: INFO: reached 172.18.0.3 after 2/34 tries
STEP: pod-Service(hostNetwork): udp
STEP: dialing(udp) test-container-pod --> 10.96.41.144:90 (config.clusterIP)
May 19 19:16:03.019: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.253:9080/dial?request=hostname&protocol=udp&host=10.96.41.144&port=90&tries=1'] Namespace:nettest-3456 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 19 19:16:03.019: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 19 19:16:03.197: INFO: Waiting for responses: map[kind-worker2:{}]
May 19 19:16:05.202: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.253:9080/dial?request=hostname&protocol=udp&host=10.96.41.144&port=90&tries=1'] Namespace:nettest-3456 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 19 19:16:05.202: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 19 19:16:05.302: INFO: Waiting for responses: map[]
May 19 19:16:05.302: INFO: reached 10.96.41.144 after 1/34 tries
STEP: dialing(udp) test-container-pod --> 172.18.0.3:30125 (nodeIP)
May 19 19:16:05.305: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.253:9080/dial?request=hostname&protocol=udp&host=172.18.0.3&port=30125&tries=1'] Namespace:nettest-3456 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 19 19:16:05.305: INFO: >>> kubeConfig: /root/.kube/kind-test-config
May 19 19:16:05.383: INFO: Waiting for responses: map[kind-worker2:{}]
May 19 19:16:19.400: FAIL: failed to get pod test-container-pod Unexpected error: <*errors.StatusError | 0xc0025581e0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred
Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).execCommandInPodWithFullOutput(0xc000d6a2c0, 0xc001ec6750, 0x12, 0xc0017092f0, 0x3, 0x3, 0x6e, 0xc000aaf860, 0x11, 0x47, ...)
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/exec_util.go:124 +0x16e
k8s.io/kubernetes/test/e2e/framework.(*Framework).ExecShellInPodWithFullOutput(...)
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/exec_util.go:136
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).GetResponseFromContainer(0xc000ca4620, 0x707138b, 0x3, 0x707ad52, 0x8, 0xc002405750, 0xc, 0xc002aa26a0, 0xa, 0x2378, ...)
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:393 +0x37d
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).DialFromContainer(0xc000ca4620, 0x707138b, 0x3, 0x707ad52, 0x8, 0xc002405750, 0xc, 0xc002aa26a0, 0xa, 0x2378, ...)
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:308 +0x392
k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).DialFromTestContainer(...)
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:221
k8s.io/kubernetes/test/e2e/network.glob..func20.6.20()
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:491 +0x966
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000329680)
  _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000329680)
  _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b
testing.tRunner(0xc000329680, 0x72e36d8)
  /usr/local/go/src/testing/testing.go:1203 +0xe5
created by testing.(*T).Run
  /usr/local/go/src/testing/testing.go:1248 +0x2b3
[AfterEach] [sig-network] Networking
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "nettest-3456".
STEP: Found 16 events.
May 19 19:16:24.054: INFO: At 2022-05-19 19:14:57 +0000 UTC - event for netserver-0: {default-scheduler } Scheduled: Successfully assigned nettest-3456/netserver-0 to kind-worker
May 19 19:16:24.054: INFO: At 2022-05-19 19:14:57 +0000 UTC - event for netserver-1: {default-scheduler } Scheduled: Successfully assigned nettest-3456/netserver-1 to kind-worker2
May 19 19:16:24.054: INFO: At 2022-05-19 19:14:59 +0000 UTC - event for netserver-0: {kubelet kind-worker} Started: Started container webserver
May 19 19:16:24.054: INFO: At 2022-05-19 19:14:59 +0000 UTC - event for netserver-0: {kubelet kind-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
May 19 19:16:24.054: INFO: At 2022-05-19 19:14:59 +0000 UTC - event for netserver-0: {kubelet kind-worker} Created: Created container webserver
May 19 19:16:24.054: INFO: At 2022-05-19 19:14:59 +0000 UTC - event for netserver-1: {kubelet kind-worker2} Created: Created container webserver
May 19 19:16:24.054: INFO: At 2022-05-19 19:14:59 +0000 UTC - event for netserver-1: {kubelet kind-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
May 19 19:16:24.054: INFO: At 2022-05-19 19:15:00 +0000 UTC - event for netserver-1: {kubelet kind-worker2} Started: Started container webserver
May 19 19:16:24.054: INFO: At 2022-05-19 19:15:37 +0000 UTC - event for host-test-container-pod: {default-scheduler } Scheduled: Successfully assigned nettest-3456/host-test-container-pod to kind-worker
May 19 19:16:24.054: INFO: At 2022-05-19 19:15:37 +0000 UTC - event for test-container-pod: {default-scheduler } Scheduled: Successfully assigned nettest-3456/test-container-pod to kind-worker
May 19 19:16:24.054: INFO: At 2022-05-19 19:15:39 +0000 UTC - event for host-test-container-pod: {kubelet kind-worker} Started: Started container agnhost-container
May 19 19:16:24.054: INFO: At 2022-05-19 19:15:39 +0000 UTC - event for host-test-container-pod: {kubelet kind-worker} Created: Created container agnhost-container
May 19 19:16:24.054: INFO: At 2022-05-19 19:15:39 +0000 UTC - event for host-test-container-pod: {kubelet kind-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
May 19 19:16:24.054: INFO: At 2022-05-19 19:15:39 +0000 UTC - event for test-container-pod: {kubelet kind-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine
May 19 19:16:24.054: INFO: At 2022-05-19 19:15:39 +0000 UTC - event for test-container-pod: {kubelet kind-worker} Created: Created container
webserver May 19 19:16:24.054: INFO: At 2022-05-19 19:15:39 +0000 UTC - event for test-container-pod: {kubelet kind-worker} Started: Started container webserver May 19 19:16:24.219: INFO: POD NODE PHASE GRACE CONDITIONS May 19 19:16:24.219: INFO: host-test-container-pod kind-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:37 +0000 UTC }] May 19 19:16:24.219: INFO: netserver-0 kind-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:14:57 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:17 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:14:57 +0000 UTC }] May 19 19:16:24.219: INFO: netserver-1 kind-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:14:57 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:17 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:14:57 +0000 UTC }] May 19 19:16:24.219: INFO: test-container-pod kind-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:39 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:39 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:37 +0000 UTC }] May 19 19:16:24.219: INFO: May 19 19:16:24.316: INFO: Logging node info for node kind-control-plane May 19 19:16:24.413: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7e5be5e1-f40a-4fa4-9c1e-ae1356a8d5d2 42765 0 2022-05-19 18:59:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-19 18:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-19 18:59:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-19 18:59:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6da7820732414c09b805ad6b354ea130,SystemUUID:cd27b40e-6f1b-48bc-8ba8-36264df7de17,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 
k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:16:24.414: INFO: Logging kubelet events for node kind-control-plane May 19 19:16:24.513: INFO: Logging pods the kubelet thinks is on node kind-control-plane May 19 19:16:24.649: INFO: local-path-provisioner-6c9449b9dd-rq246 started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.649: INFO: Container local-path-provisioner ready: true, restart count 0 May 19 19:16:24.649: INFO: etcd-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.649: INFO: Container etcd ready: true, restart count 0 May 19 19:16:24.649: INFO: kube-apiserver-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.649: INFO: Container kube-apiserver ready: true, restart count 0 May 19 19:16:24.649: INFO: kube-controller-manager-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.649: INFO: Container kube-controller-manager ready: true, restart count 0 May 19 19:16:24.649: INFO: kindnet-sp68s started at 2022-05-19 18:59:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.649: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:24.649: INFO: coredns-78fcd69978-szhjl started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.649: INFO: Container coredns ready: true, restart count 0 May 19 19:16:24.649: INFO: kube-scheduler-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.649: INFO: Container kube-scheduler ready: true, restart count 0 May 19 19:16:24.649: INFO: coredns-78fcd69978-79cfm started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.649: INFO: Container coredns ready: true, restart count 0 May 19 19:16:24.649: INFO: kube-proxy-c8wmp started at 2022-05-19 18:59:52 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.649: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:25.099: INFO: Latency metrics for node kind-control-plane May 19 19:16:25.099: INFO: Logging node info for node kind-worker May 19 19:16:25.172: INFO: Node Info: &Node{ObjectMeta:{kind-worker 5aace22e-9461-4dd4-8842-d4c95088e6c2 46103 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-7246":"csi-mock-csi-mock-volumes-7246"} 
kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-19 19:15:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:23febdd7255b49db9d80d997950dd2f2,SystemUUID:09159bf9-dc54-4c7a-91f7-a2bdb5d0f9d7,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 
k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:16:25.172: INFO: Logging kubelet events for node kind-worker May 19 19:16:25.210: INFO: Logging pods the kubelet thinks is on node kind-worker May 19 19:16:25.339: INFO: hostexec-kind-worker-dcvzn started at <nil> (0+0 container statuses recorded) May 19 19:16:25.339: INFO: netserver-0 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container 
statuses recorded) May 19 19:16:25.339: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.339: INFO: configmap-client started at <nil> (0+0 container statuses recorded) May 19 19:16:25.339: INFO: netserver-0 started at 2022-05-19 19:14:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.339: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.339: INFO: oidc-discovery-validator started at 2022-05-19 19:15:50 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.339: INFO: Container oidc-discovery-validator ready: false, restart count 0 May 19 19:16:25.339: INFO: csi-mockplugin-0 started at 2022-05-19 19:14:14 +0000 UTC (0+3 container statuses recorded) May 19 19:16:25.339: INFO: Container csi-provisioner ready: true, restart count 0 May 19 19:16:25.339: INFO: Container driver-registrar ready: true, restart count 0 May 19 19:16:25.339: INFO: Container mock ready: true, restart count 0 May 19 19:16:25.339: INFO: hostexec-kind-worker-9n75g started at 2022-05-19 19:16:01 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.339: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:25.339: INFO: up-down-1-wbz22 started at 2022-05-19 19:14:24 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.339: INFO: Container up-down-1 ready: true, restart count 0 May 19 19:16:25.339: INFO: up-down-2-prvvv started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.339: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:25.339: INFO: netserver-0 started at 2022-05-19 19:14:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.339: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.339: INFO: pod-subpath-test-projected-p9hs started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.339: INFO: Container test-container-subpath-projected-p9hs ready: true, restart count 0 May 19 19:16:25.339: INFO: ss2-0 started at 2022-05-19 19:15:40 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.339: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.339: INFO: test-new-deployment-847dcfb7fb-c4njf started at 2022-05-19 19:15:58 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.339: INFO: Container httpd ready: false, restart count 0 May 19 19:16:25.339: INFO: startup-script started at 2022-05-19 19:16:00 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.339: INFO: Container startup-script ready: true, restart count 0 May 19 19:16:25.339: INFO: ss2-2 started at <nil> (0+0 container statuses recorded) May 19 19:16:25.339: INFO: pod-secrets-7eedc752-ee2a-4887-bfa0-6b08ae1b03ad started at 2022-05-19 19:15:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.339: INFO: Container secret-volume-test ready: false, restart count 0 May 19 19:16:25.339: INFO: csi-mockplugin-resizer-0 started at 2022-05-19 19:14:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.339: INFO: Container csi-resizer ready: true, restart count 0 May 19 19:16:25.339: INFO: kindnet-4gdb4 started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.339: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:25.339: INFO: csi-mockplugin-attacher-0 started at 2022-05-19 19:14:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.339: INFO: Container csi-attacher ready: true, restart count 0 May 19 19:16:25.339: INFO: test-container-pod 
started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.339: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.339: INFO: host-test-container-pod started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.339: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:25.339: INFO: kube-proxy-cv6pt started at 2022-05-19 18:59:55 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.339: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:25.339: INFO: pod-secrets-a23f9038-0c7a-436f-a040-8f745ef7d572 started at 2022-05-19 19:15:35 +0000 UTC (0+3 container statuses recorded) May 19 19:16:25.339: INFO: Container creates-volume-test ready: true, restart count 0 May 19 19:16:25.339: INFO: Container dels-volume-test ready: true, restart count 0 May 19 19:16:25.339: INFO: Container upds-volume-test ready: true, restart count 0 May 19 19:16:25.339: INFO: test-container-pod started at 2022-05-19 19:15:43 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.339: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.339: INFO: up-down-2-gkj59 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.339: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:25.339: INFO: hostexec-kind-worker-f96wj started at 2022-05-19 19:15:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.339: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:26.400: INFO: Latency metrics for node kind-worker May 19 19:16:26.400: INFO: Logging node info for node kind-worker2 May 19 19:16:26.434: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 7a16523d-7da3-4c78-89f3-8eb0caae50f1 46424 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-1152":"kind-worker2"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2022-05-19 19:15:42 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-05-19 19:16:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9e448859533f489fa0673b5d908c4c8a,SystemUUID:73bc70df-3537-48c2-a00e-91739ab5b72a,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 
k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce 
k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1152^11ea9557-d7a8-11ec-b978-a6428bea170d,DevicePath:,},},Config:nil,},} May 19 19:16:26.435: INFO: Logging kubelet events for node kind-worker2 May 19 19:16:26.461: INFO: Logging pods the kubelet thinks is on node kind-worker2 May 19 19:16:26.566: INFO: pod-22513bfc-35e4-450f-9936-d72b17f75f4c started at 2022-05-19 19:15:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.566: INFO: Container write-pod ready: true, restart count 0 May 19 19:16:26.566: INFO: netserver-1 started at 2022-05-19 19:14:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.566: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.566: INFO: kube-proxy-wgjrm started at 2022-05-19 18:59:58 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.566: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:26.566: INFO: hostexec-kind-worker2-86jcw started at 2022-05-19 19:15:26 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.566: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:26.566: INFO: ss2-1 started at 2022-05-19 19:15:51 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.566: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.566: INFO: ss-0 started at 2022-05-19 19:15:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.566: INFO: Container 
webserver ready: true, restart count 0
May 19 19:16:26.566: INFO: pod-secrets-bab7a309-2f16-455c-99ff-6d1894bc83b0 started at <nil> (0+0 container statuses recorded)
May 19 19:16:26.566: INFO: up-down-2-5zs72 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded)
May 19 19:16:26.566: INFO: Container up-down-2 ready: true, restart count 0
May 19 19:16:26.566: INFO: netserver-1 started at 2022-05-19 19:14:53 +0000 UTC (0+1 container statuses recorded)
May 19 19:16:26.566: INFO: Container webserver ready: true, restart count 0
May 19 19:16:26.566: INFO: csi-hostpathplugin-0 started at 2022-05-19 19:15:33 +0000 UTC (0+7 container statuses recorded)
May 19 19:16:26.566: INFO: Container csi-attacher ready: true, restart count 0
May 19 19:16:26.566: INFO: Container csi-provisioner ready: true, restart count 0
May 19 19:16:26.566: INFO: Container csi-resizer ready: true, restart count 0
May 19 19:16:26.566: INFO: Container csi-snapshotter ready: true, restart count 0
May 19 19:16:26.566: INFO: Container hostpath ready: true, restart count 0
May 19 19:16:26.566: INFO: Container liveness-probe ready: true, restart count 0
May 19 19:16:26.566: INFO: Container node-driver-registrar ready: true, restart count 0
May 19 19:16:26.566: INFO: liveness-188ed752-b6d5-4d2d-8753-f3495434988f started at 2022-05-19 19:14:22 +0000 UTC (0+1 container statuses recorded)
May 19 19:16:26.566: INFO: Container agnhost-container ready: false, restart count 4
May 19 19:16:26.566: INFO: downwardapi-volume-cdb36836-a0b8-4038-ba01-38306a582019 started at 2022-05-19 19:16:03 +0000 UTC (0+1 container statuses recorded)
May 19 19:16:26.566: INFO: Container client-container ready: false, restart count 0
May 19 19:16:26.566: INFO: hostexec-kind-worker2-vfx8t started at 2022-05-19 19:15:14 +0000 UTC (0+1 container statuses recorded)
May 19 19:16:26.566: INFO: Container agnhost-container ready: true, restart count 0
May 19 19:16:26.566: INFO: rs-jmlhb started at 2022-05-19 19:15:20 +0000 UTC (0+1 container statuses recorded)
May 19 19:16:26.566: INFO: Container donothing ready: false, restart count 0
May 19 19:16:26.566: INFO: boom-server started at 2022-05-19 19:15:50 +0000 UTC (0+1 container statuses recorded)
May 19 19:16:26.566: INFO: Container boom-server ready: true, restart count 0
May 19 19:16:26.566: INFO: forbid-27549796-5gbns started at 2022-05-19 19:16:00 +0000 UTC (0+1 container statuses recorded)
May 19 19:16:26.566: INFO: Container c ready: true, restart count 0
May 19 19:16:26.566: INFO: netserver-1 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container statuses recorded)
May 19 19:16:26.566: INFO: Container webserver ready: false, restart count 0
May 19 19:16:26.567: INFO: busybox-91f9a5ba-0aeb-4445-a18c-ab8e7ec56a1a started at 2022-05-19 19:15:23 +0000 UTC (0+1 container statuses recorded)
May 19 19:16:26.567: INFO: Container busybox ready: true, restart count 0
May 19 19:16:26.567: INFO: kindnet-jk9nv started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded)
May 19 19:16:26.567: INFO: Container kindnet-cni ready: true, restart count 0
May 19 19:16:27.331: INFO: Latency metrics for node kind-worker2
May 19 19:16:27.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "nettest-3456" for this suite.
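The dial probes in this test are plain curl calls that the framework runs inside test-container-pod against the agnhost webserver's /dial endpoint; the failure above was not the probe itself but the framework re-fetching test-container-pod and getting "etcdserver: request timed out" back from the apiserver. Against a cluster that is still up, a single probe can be replayed by hand with the values copied verbatim from the log entries above (kubeconfig path, namespace, pod name and URL); this is only a manual re-check sketch, not part of the recorded output:

  kubectl --kubeconfig /root/.kube/kind-test-config -n nettest-3456 exec test-container-pod -- \
    /bin/sh -c "curl -g -q -s 'http://10.244.1.253:9080/dial?request=hostname&protocol=udp&host=172.18.0.3&port=30125&tries=1'"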
Find test-container-pod mentions in log files
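The same search can be done offline once the job artifacts are downloaded; the file names below assume the usual Prow layout (build-log.txt plus the artifacts/ tree) and may differ for this job:

  grep -rn "test-container-pod" build-log.txt artifacts/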
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-node\]\sProbing\scontainer\sshould\s\*not\*\sbe\srestarted\swith\sa\sexec\s\"cat\s\/tmp\/health\"\sliveness\sprobe\s\[NodeConformance\]\s\[Conformance\]$'
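This spec creates a busybox pod with an exec liveness probe running `cat /tmp/health` and then watches the container's restartCount, expecting it to stay at 0; the failure below happened while re-reading the pod, not because the probe restarted the container. If the namespace from this run still existed, the restart count the test was polling could be read directly (namespace and pod name taken from the log below):

  kubectl --kubeconfig /root/.kube/kind-test-config -n container-probe-3305 get pod busybox-91f9a5ba-0aeb-4445-a18c-ab8e7ec56a1a \
    -o jsonpath='{.status.containerStatuses[0].restartCount}'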
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 19 19:16:19.383: getting pod Unexpected error: <*errors.StatusError | 0xc001314c80>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:701from junit_11.xml
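This failure and the others in this run share the same proximate cause: the apiserver answered with HTTP 500 and the message "etcdserver: request timed out", i.e. etcd could not serve the request in time rather than the test logic being wrong. On a cluster that is still running, one quick check is to ask etcd for its own status from inside the static etcd pod; the pod name matches the etcd-kind-control-plane entry logged above, and the certificate paths are the kubeadm/kind defaults, so treat this as an illustrative triage sketch rather than a guaranteed-correct command for this exact image:

  kubectl --kubeconfig /root/.kube/kind-test-config -n kube-system exec etcd-kind-control-plane -- \
    etcdctl --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key \
      endpoint status -w table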
[BeforeEach] [sig-node] Probing container
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 19 19:15:23.342: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod busybox-91f9a5ba-0aeb-4445-a18c-ab8e7ec56a1a in namespace container-probe-3305
May 19 19:15:49.503: INFO: Started pod busybox-91f9a5ba-0aeb-4445-a18c-ab8e7ec56a1a in namespace container-probe-3305
STEP: checking the pod's current state and verifying that restartCount is present
May 19 19:15:49.505: INFO: Initial restart count of pod busybox-91f9a5ba-0aeb-4445-a18c-ab8e7ec56a1a is 0
May 19 19:16:19.383: FAIL: getting pod Unexpected error: <*errors.StatusError | 0xc001314c80>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred
Full Stack Trace
k8s.io/kubernetes/test/e2e/common/node.RunLivenessTest(0xc000a022c0, 0xc002a57800, 0x0, 0x37e11d6000)
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:701 +0xbaa
k8s.io/kubernetes/test/e2e/common/node.glob..func2.5()
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:148 +0x19e
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0006dfc80)
  _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0006dfc80)
  _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b
testing.tRunner(0xc0006dfc80, 0x72e36d8)
  /usr/local/go/src/testing/testing.go:1203 +0xe5
created by testing.(*T).Run
  /usr/local/go/src/testing/testing.go:1248 +0x2b3
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "container-probe-3305".
STEP: Found 4 events.
May 19 19:16:24.054: INFO: At 2022-05-19 19:15:23 +0000 UTC - event for busybox-91f9a5ba-0aeb-4445-a18c-ab8e7ec56a1a: {default-scheduler } Scheduled: Successfully assigned container-probe-3305/busybox-91f9a5ba-0aeb-4445-a18c-ab8e7ec56a1a to kind-worker2 May 19 19:16:24.054: INFO: At 2022-05-19 19:15:24 +0000 UTC - event for busybox-91f9a5ba-0aeb-4445-a18c-ab8e7ec56a1a: {kubelet kind-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine May 19 19:16:24.054: INFO: At 2022-05-19 19:15:25 +0000 UTC - event for busybox-91f9a5ba-0aeb-4445-a18c-ab8e7ec56a1a: {kubelet kind-worker2} Created: Created container busybox May 19 19:16:24.054: INFO: At 2022-05-19 19:15:25 +0000 UTC - event for busybox-91f9a5ba-0aeb-4445-a18c-ab8e7ec56a1a: {kubelet kind-worker2} Started: Started container busybox May 19 19:16:24.217: INFO: POD NODE PHASE GRACE CONDITIONS May 19 19:16:24.217: INFO: busybox-91f9a5ba-0aeb-4445-a18c-ab8e7ec56a1a kind-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:23 +0000 UTC }] May 19 19:16:24.217: INFO: May 19 19:16:24.316: INFO: Logging node info for node kind-control-plane May 19 19:16:24.418: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7e5be5e1-f40a-4fa4-9c1e-ae1356a8d5d2 42765 0 2022-05-19 18:59:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-19 18:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-19 18:59:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-19 18:59:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} 
status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6da7820732414c09b805ad6b354ea130,SystemUUID:cd27b40e-6f1b-48bc-8ba8-36264df7de17,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 
k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:16:24.418: INFO: Logging kubelet events for node kind-control-plane May 19 19:16:24.515: INFO: Logging pods the kubelet thinks is on node kind-control-plane May 19 19:16:24.652: INFO: kube-scheduler-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.652: INFO: Container kube-scheduler ready: true, restart count 0 May 19 19:16:24.652: INFO: coredns-78fcd69978-79cfm started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.652: INFO: Container coredns ready: true, restart count 0 May 19 19:16:24.652: INFO: kube-proxy-c8wmp started at 2022-05-19 18:59:52 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.652: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:24.652: INFO: etcd-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.652: INFO: Container etcd ready: true, restart count 0 May 19 19:16:24.652: INFO: kube-apiserver-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.652: INFO: Container kube-apiserver ready: true, restart count 0 May 19 19:16:24.652: INFO: kube-controller-manager-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.652: INFO: Container kube-controller-manager ready: true, restart count 0 May 19 19:16:24.652: INFO: kindnet-sp68s started at 2022-05-19 18:59:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.652: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:24.652: INFO: coredns-78fcd69978-szhjl started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.652: INFO: Container coredns ready: true, restart count 0 May 19 19:16:24.652: INFO: local-path-provisioner-6c9449b9dd-rq246 started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.652: INFO: Container local-path-provisioner ready: true, restart count 0 May 19 19:16:25.136: INFO: Latency metrics for node kind-control-plane May 19 19:16:25.136: INFO: Logging node info for node kind-worker May 19 19:16:25.173: INFO: Node Info: &Node{ObjectMeta:{kind-worker 5aace22e-9461-4dd4-8842-d4c95088e6c2 46103 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-7246":"csi-mock-csi-mock-volumes-7246"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-19 19:15:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:23febdd7255b49db9d80d997950dd2f2,SystemUUID:09159bf9-dc54-4c7a-91f7-a2bdb5d0f9d7,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 
21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 
k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:16:25.174: INFO: Logging kubelet events for node kind-worker May 19 19:16:25.210: INFO: Logging pods the kubelet thinks is on node kind-worker May 19 19:16:25.259: INFO: test-new-deployment-847dcfb7fb-c4njf started at 2022-05-19 19:15:58 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.259: INFO: Container httpd ready: false, restart count 0 May 19 19:16:25.259: INFO: startup-script started at 2022-05-19 19:16:00 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.259: INFO: Container startup-script ready: true, restart count 0 May 19 19:16:25.259: INFO: ss2-2 started at <nil> (0+0 container statuses recorded) May 19 19:16:25.259: INFO: pod-secrets-7eedc752-ee2a-4887-bfa0-6b08ae1b03ad started at 2022-05-19 
19:15:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.259: INFO: Container secret-volume-test ready: false, restart count 0 May 19 19:16:25.259: INFO: csi-mockplugin-resizer-0 started at 2022-05-19 19:14:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.259: INFO: Container csi-resizer ready: true, restart count 0 May 19 19:16:25.259: INFO: csi-mockplugin-attacher-0 started at 2022-05-19 19:14:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.259: INFO: Container csi-attacher ready: true, restart count 0 May 19 19:16:25.259: INFO: test-container-pod started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.259: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.259: INFO: host-test-container-pod started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.259: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:25.259: INFO: kindnet-4gdb4 started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.259: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:25.259: INFO: pod-secrets-a23f9038-0c7a-436f-a040-8f745ef7d572 started at 2022-05-19 19:15:35 +0000 UTC (0+3 container statuses recorded) May 19 19:16:25.259: INFO: Container creates-volume-test ready: true, restart count 0 May 19 19:16:25.259: INFO: Container dels-volume-test ready: true, restart count 0 May 19 19:16:25.259: INFO: Container upds-volume-test ready: true, restart count 0 May 19 19:16:25.259: INFO: test-container-pod started at 2022-05-19 19:15:43 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.259: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.259: INFO: up-down-2-gkj59 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.259: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:25.259: INFO: hostexec-kind-worker-f96wj started at 2022-05-19 19:15:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.259: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:25.259: INFO: kube-proxy-cv6pt started at 2022-05-19 18:59:55 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.259: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:25.259: INFO: netserver-0 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.259: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.259: INFO: configmap-client started at <nil> (0+0 container statuses recorded) May 19 19:16:25.259: INFO: netserver-0 started at 2022-05-19 19:14:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.259: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.259: INFO: oidc-discovery-validator started at 2022-05-19 19:15:50 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.259: INFO: Container oidc-discovery-validator ready: false, restart count 0 May 19 19:16:25.259: INFO: hostexec-kind-worker-dcvzn started at <nil> (0+0 container statuses recorded) May 19 19:16:25.259: INFO: csi-mockplugin-0 started at 2022-05-19 19:14:14 +0000 UTC (0+3 container statuses recorded) May 19 19:16:25.259: INFO: Container csi-provisioner ready: true, restart count 0 May 19 19:16:25.259: INFO: Container driver-registrar ready: true, restart count 0 May 19 19:16:25.259: INFO: Container mock ready: true, restart count 0 May 19 19:16:25.259: INFO: up-down-1-wbz22 
started at 2022-05-19 19:14:24 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.259: INFO: Container up-down-1 ready: true, restart count 0 May 19 19:16:25.259: INFO: up-down-2-prvvv started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.259: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:25.259: INFO: netserver-0 started at 2022-05-19 19:14:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.259: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.259: INFO: pod-subpath-test-projected-p9hs started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.259: INFO: Container test-container-subpath-projected-p9hs ready: true, restart count 0 May 19 19:16:25.259: INFO: ss2-0 started at 2022-05-19 19:15:40 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.259: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.259: INFO: hostexec-kind-worker-9n75g started at 2022-05-19 19:16:01 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.259: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:26.364: INFO: Latency metrics for node kind-worker May 19 19:16:26.364: INFO: Logging node info for node kind-worker2 May 19 19:16:26.398: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 7a16523d-7da3-4c78-89f3-8eb0caae50f1 46424 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-1152":"kind-worker2"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2022-05-19 19:15:42 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-05-19 19:16:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9e448859533f489fa0673b5d908c4c8a,SystemUUID:73bc70df-3537-48c2-a00e-91739ab5b72a,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 
k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1152^11ea9557-d7a8-11ec-b978-a6428bea170d,DevicePath:,},},Config:nil,},} May 19 19:16:26.399: INFO: Logging kubelet events for node kind-worker2 May 19 19:16:26.430: INFO: Logging pods the kubelet thinks is on node kind-worker2 May 19 19:16:26.490: INFO: netserver-1 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.490: INFO: Container webserver ready: false, restart count 0 May 19 19:16:26.490: INFO: busybox-91f9a5ba-0aeb-4445-a18c-ab8e7ec56a1a started at 2022-05-19 19:15:23 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.490: INFO: Container busybox ready: true, restart count 0 May 19 19:16:26.490: INFO: kindnet-jk9nv started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.490: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:26.490: INFO: pod-22513bfc-35e4-450f-9936-d72b17f75f4c started at 2022-05-19 19:15:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.490: INFO: Container write-pod ready: true, restart count 0 May 19 19:16:26.490: INFO: kube-proxy-wgjrm started at 2022-05-19 18:59:58 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.490: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:26.490: INFO: hostexec-kind-worker2-86jcw started at 2022-05-19 19:15:26 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.490: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:26.490: INFO: ss2-1 started at 2022-05-19 19:15:51 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.490: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.490: INFO: netserver-1 started at 2022-05-19 19:14:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.490: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.490: INFO: 
pod-secrets-bab7a309-2f16-455c-99ff-6d1894bc83b0 started at <nil> (0+0 container statuses recorded) May 19 19:16:26.490: INFO: up-down-2-5zs72 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.490: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:26.490: INFO: ss-0 started at 2022-05-19 19:15:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.490: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.490: INFO: csi-hostpathplugin-0 started at 2022-05-19 19:15:33 +0000 UTC (0+7 container statuses recorded) May 19 19:16:26.490: INFO: Container csi-attacher ready: true, restart count 0 May 19 19:16:26.491: INFO: Container csi-provisioner ready: true, restart count 0 May 19 19:16:26.491: INFO: Container csi-resizer ready: true, restart count 0 May 19 19:16:26.491: INFO: Container csi-snapshotter ready: true, restart count 0 May 19 19:16:26.491: INFO: Container hostpath ready: true, restart count 0 May 19 19:16:26.491: INFO: Container liveness-probe ready: true, restart count 0 May 19 19:16:26.491: INFO: Container node-driver-registrar ready: true, restart count 0 May 19 19:16:26.491: INFO: liveness-188ed752-b6d5-4d2d-8753-f3495434988f started at 2022-05-19 19:14:22 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.491: INFO: Container agnhost-container ready: false, restart count 4 May 19 19:16:26.491: INFO: downwardapi-volume-cdb36836-a0b8-4038-ba01-38306a582019 started at 2022-05-19 19:16:03 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.491: INFO: Container client-container ready: false, restart count 0 May 19 19:16:26.491: INFO: netserver-1 started at 2022-05-19 19:14:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.491: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.491: INFO: rs-jmlhb started at 2022-05-19 19:15:20 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.491: INFO: Container donothing ready: false, restart count 0 May 19 19:16:26.491: INFO: boom-server started at 2022-05-19 19:15:50 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.491: INFO: Container boom-server ready: true, restart count 0 May 19 19:16:26.491: INFO: hostexec-kind-worker2-vfx8t started at 2022-05-19 19:15:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.491: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:26.491: INFO: forbid-27549796-5gbns started at 2022-05-19 19:16:00 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.491: INFO: Container c ready: true, restart count 0 May 19 19:16:27.337: INFO: Latency metrics for node kind-worker2 May 19 19:16:27.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3305" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-node\]\sProbing\scontainer\sshould\shave\smonotonically\sincreasing\srestart\scount\s\[NodeConformance\]\s\[Conformance\]$'
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 19 19:16:19.410: getting pod Unexpected error: <*errors.StatusError | 0xc0036d0be0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:701 from junit_24.xml
[BeforeEach] [sig-node] Probing container /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 19 19:14:22.908: INFO: >>> kubeConfig: /root/.kube/kind-test-config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-188ed752-b6d5-4d2d-8753-f3495434988f in namespace container-probe-5215 May 19 19:14:36.961: INFO: Started pod liveness-188ed752-b6d5-4d2d-8753-f3495434988f in namespace container-probe-5215 STEP: checking the pod's current state and verifying that restartCount is present May 19 19:14:36.972: INFO: Initial restart count of pod liveness-188ed752-b6d5-4d2d-8753-f3495434988f is 0 May 19 19:14:57.101: INFO: Restart count of pod container-probe-5215/liveness-188ed752-b6d5-4d2d-8753-f3495434988f is now 1 (20.128856094s elapsed) May 19 19:15:25.192: INFO: Restart count of pod container-probe-5215/liveness-188ed752-b6d5-4d2d-8753-f3495434988f is now 2 (48.219421866s elapsed) May 19 19:15:37.267: INFO: Restart count of pod container-probe-5215/liveness-188ed752-b6d5-4d2d-8753-f3495434988f is now 3 (1m0.294697615s elapsed) May 19 19:15:59.359: INFO: Restart count of pod container-probe-5215/liveness-188ed752-b6d5-4d2d-8753-f3495434988f is now 4 (1m22.387028035s elapsed) May 19 19:16:19.410: FAIL: getting pod Unexpected error: <*errors.StatusError | 0xc0036d0be0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred Full Stack Trace k8s.io/kubernetes/test/e2e/common/node.RunLivenessTest(0xc00047fe40, 0xc0079b9c00, 0x5, 0x45d964b800) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:701 +0xbaa k8s.io/kubernetes/test/e2e/common/node.glob..func2.8() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:193 +0x156 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0008fbc80) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0008fbc80) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc0008fbc80, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 STEP: deleting the pod [AfterEach] [sig-node] Probing container /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "container-probe-5215". STEP: Found 6 events.
May 19 19:16:24.048: INFO: At 2022-05-19 19:14:22 +0000 UTC - event for liveness-188ed752-b6d5-4d2d-8753-f3495434988f: {default-scheduler } Scheduled: Successfully assigned container-probe-5215/liveness-188ed752-b6d5-4d2d-8753-f3495434988f to kind-worker2 May 19 19:16:24.048: INFO: At 2022-05-19 19:14:24 +0000 UTC - event for liveness-188ed752-b6d5-4d2d-8753-f3495434988f: {kubelet kind-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine May 19 19:16:24.048: INFO: At 2022-05-19 19:14:24 +0000 UTC - event for liveness-188ed752-b6d5-4d2d-8753-f3495434988f: {kubelet kind-worker2} Created: Created container agnhost-container May 19 19:16:24.048: INFO: At 2022-05-19 19:14:24 +0000 UTC - event for liveness-188ed752-b6d5-4d2d-8753-f3495434988f: {kubelet kind-worker2} Started: Started container agnhost-container May 19 19:16:24.048: INFO: At 2022-05-19 19:14:42 +0000 UTC - event for liveness-188ed752-b6d5-4d2d-8753-f3495434988f: {kubelet kind-worker2} Unhealthy: Liveness probe failed: HTTP probe failed with statuscode: 500 May 19 19:16:24.048: INFO: At 2022-05-19 19:14:42 +0000 UTC - event for liveness-188ed752-b6d5-4d2d-8753-f3495434988f: {kubelet kind-worker2} Killing: Container agnhost-container failed liveness probe, will be restarted May 19 19:16:24.220: INFO: POD NODE PHASE GRACE CONDITIONS May 19 19:16:24.220: INFO: liveness-188ed752-b6d5-4d2d-8753-f3495434988f kind-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:14:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:14:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:14:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:14:22 +0000 UTC }] May 19 19:16:24.220: INFO: May 19 19:16:24.316: INFO: Logging node info for node kind-control-plane May 19 19:16:24.413: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7e5be5e1-f40a-4fa4-9c1e-ae1356a8d5d2 42765 0 2022-05-19 18:59:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-19 18:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-19 18:59:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-19 18:59:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6da7820732414c09b805ad6b354ea130,SystemUUID:cd27b40e-6f1b-48bc-8ba8-36264df7de17,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 
k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:16:24.413: INFO: Logging kubelet events for node kind-control-plane May 19 19:16:24.519: INFO: Logging pods the kubelet thinks is on node kind-control-plane May 19 19:16:24.639: INFO: etcd-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.639: INFO: Container etcd ready: true, restart count 0 May 19 19:16:24.639: INFO: kube-apiserver-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.639: INFO: Container kube-apiserver ready: true, restart count 0 May 19 19:16:24.639: INFO: kube-controller-manager-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.639: INFO: Container kube-controller-manager ready: true, restart count 0 May 19 19:16:24.639: INFO: kindnet-sp68s started at 2022-05-19 18:59:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.640: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:24.640: INFO: coredns-78fcd69978-szhjl started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.640: INFO: Container coredns ready: true, restart count 0 May 19 19:16:24.640: INFO: local-path-provisioner-6c9449b9dd-rq246 started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.640: INFO: Container local-path-provisioner ready: true, restart count 0 May 19 19:16:24.640: INFO: kube-scheduler-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.640: INFO: Container kube-scheduler ready: true, restart count 0 May 19 19:16:24.640: INFO: coredns-78fcd69978-79cfm started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.640: INFO: Container coredns ready: true, restart count 0 May 19 19:16:24.640: INFO: kube-proxy-c8wmp started at 2022-05-19 18:59:52 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.640: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:25.118: INFO: Latency metrics for node kind-control-plane May 19 19:16:25.118: INFO: Logging node info for node kind-worker May 19 19:16:25.170: INFO: Node Info: &Node{ObjectMeta:{kind-worker 5aace22e-9461-4dd4-8842-d4c95088e6c2 46103 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-7246":"csi-mock-csi-mock-volumes-7246"} 
kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-19 19:15:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:23febdd7255b49db9d80d997950dd2f2,SystemUUID:09159bf9-dc54-4c7a-91f7-a2bdb5d0f9d7,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 
k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:16:25.170: INFO: Logging kubelet events for node kind-worker May 19 19:16:25.211: INFO: Logging pods the kubelet thinks is on node kind-worker May 19 19:16:25.297: INFO: configmap-client started at <nil> (0+0 container statuses recorded) May 19 19:16:25.297: INFO: netserver-0 started at 2022-05-19 19:14:57 +0000 UTC (0+1 container statuses 
recorded) May 19 19:16:25.297: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.297: INFO: oidc-discovery-validator started at 2022-05-19 19:15:50 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.297: INFO: Container oidc-discovery-validator ready: false, restart count 0 May 19 19:16:25.297: INFO: hostexec-kind-worker-dcvzn started at <nil> (0+0 container statuses recorded) May 19 19:16:25.297: INFO: netserver-0 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.297: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.297: INFO: csi-mockplugin-0 started at 2022-05-19 19:14:14 +0000 UTC (0+3 container statuses recorded) May 19 19:16:25.297: INFO: Container csi-provisioner ready: true, restart count 0 May 19 19:16:25.297: INFO: Container driver-registrar ready: true, restart count 0 May 19 19:16:25.297: INFO: Container mock ready: true, restart count 0 May 19 19:16:25.297: INFO: up-down-2-prvvv started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.297: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:25.297: INFO: netserver-0 started at 2022-05-19 19:14:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.297: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.297: INFO: pod-subpath-test-projected-p9hs started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.297: INFO: Container test-container-subpath-projected-p9hs ready: true, restart count 0 May 19 19:16:25.297: INFO: ss2-0 started at 2022-05-19 19:15:40 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.297: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.297: INFO: hostexec-kind-worker-9n75g started at 2022-05-19 19:16:01 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.297: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:25.297: INFO: up-down-1-wbz22 started at 2022-05-19 19:14:24 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.297: INFO: Container up-down-1 ready: true, restart count 0 May 19 19:16:25.297: INFO: startup-script started at 2022-05-19 19:16:00 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.297: INFO: Container startup-script ready: true, restart count 0 May 19 19:16:25.297: INFO: ss2-2 started at <nil> (0+0 container statuses recorded) May 19 19:16:25.297: INFO: test-new-deployment-847dcfb7fb-c4njf started at 2022-05-19 19:15:58 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.297: INFO: Container httpd ready: false, restart count 0 May 19 19:16:25.297: INFO: csi-mockplugin-resizer-0 started at 2022-05-19 19:14:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.297: INFO: Container csi-resizer ready: true, restart count 0 May 19 19:16:25.297: INFO: pod-secrets-7eedc752-ee2a-4887-bfa0-6b08ae1b03ad started at 2022-05-19 19:15:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.297: INFO: Container secret-volume-test ready: false, restart count 0 May 19 19:16:25.297: INFO: test-container-pod started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.297: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.297: INFO: host-test-container-pod started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.297: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:25.297: INFO: kindnet-4gdb4 
started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.297: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:25.297: INFO: csi-mockplugin-attacher-0 started at 2022-05-19 19:14:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.297: INFO: Container csi-attacher ready: true, restart count 0 May 19 19:16:25.297: INFO: test-container-pod started at 2022-05-19 19:15:43 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.297: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.297: INFO: up-down-2-gkj59 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.297: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:25.297: INFO: hostexec-kind-worker-f96wj started at 2022-05-19 19:15:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.297: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:25.297: INFO: kube-proxy-cv6pt started at 2022-05-19 18:59:55 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.297: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:25.298: INFO: pod-secrets-a23f9038-0c7a-436f-a040-8f745ef7d572 started at 2022-05-19 19:15:35 +0000 UTC (0+3 container statuses recorded) May 19 19:16:25.298: INFO: Container creates-volume-test ready: true, restart count 0 May 19 19:16:25.298: INFO: Container dels-volume-test ready: true, restart count 0 May 19 19:16:25.298: INFO: Container upds-volume-test ready: true, restart count 0 May 19 19:16:26.375: INFO: Latency metrics for node kind-worker May 19 19:16:26.375: INFO: Logging node info for node kind-worker2 May 19 19:16:26.459: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 7a16523d-7da3-4c78-89f3-8eb0caae50f1 46424 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-1152":"kind-worker2"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2022-05-19 19:15:42 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-05-19 19:16:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9e448859533f489fa0673b5d908c4c8a,SystemUUID:73bc70df-3537-48c2-a00e-91739ab5b72a,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 
k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce 
k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1152^11ea9557-d7a8-11ec-b978-a6428bea170d,DevicePath:,},},Config:nil,},} May 19 19:16:26.460: INFO: Logging kubelet events for node kind-worker2 May 19 19:16:26.556: INFO: Logging pods the kubelet thinks is on node kind-worker2 May 19 19:16:26.586: INFO: busybox-91f9a5ba-0aeb-4445-a18c-ab8e7ec56a1a started at 2022-05-19 19:15:23 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.586: INFO: Container busybox ready: true, restart count 0 May 19 19:16:26.586: INFO: kindnet-jk9nv started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.586: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:26.586: INFO: pod-22513bfc-35e4-450f-9936-d72b17f75f4c started at 2022-05-19 19:15:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.586: INFO: Container write-pod ready: true, restart count 0 May 19 19:16:26.586: INFO: kube-proxy-wgjrm started at 2022-05-19 18:59:58 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.586: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:26.586: INFO: hostexec-kind-worker2-86jcw started at 2022-05-19 19:15:26 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.586: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:26.586: INFO: ss2-1 started at 2022-05-19 19:15:51 +0000 UTC (0+1 container statuses 
recorded) May 19 19:16:26.586: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.586: INFO: netserver-1 started at 2022-05-19 19:14:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.586: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.586: INFO: pod-secrets-bab7a309-2f16-455c-99ff-6d1894bc83b0 started at <nil> (0+0 container statuses recorded) May 19 19:16:26.586: INFO: up-down-2-5zs72 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.586: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:26.586: INFO: ss-0 started at 2022-05-19 19:15:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.586: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.586: INFO: csi-hostpathplugin-0 started at 2022-05-19 19:15:33 +0000 UTC (0+7 container statuses recorded) May 19 19:16:26.586: INFO: Container csi-attacher ready: true, restart count 0 May 19 19:16:26.586: INFO: Container csi-provisioner ready: true, restart count 0 May 19 19:16:26.586: INFO: Container csi-resizer ready: true, restart count 0 May 19 19:16:26.586: INFO: Container csi-snapshotter ready: true, restart count 0 May 19 19:16:26.586: INFO: Container hostpath ready: true, restart count 0 May 19 19:16:26.586: INFO: Container liveness-probe ready: true, restart count 0 May 19 19:16:26.586: INFO: Container node-driver-registrar ready: true, restart count 0 May 19 19:16:26.586: INFO: liveness-188ed752-b6d5-4d2d-8753-f3495434988f started at 2022-05-19 19:14:22 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.586: INFO: Container agnhost-container ready: false, restart count 4 May 19 19:16:26.586: INFO: downwardapi-volume-cdb36836-a0b8-4038-ba01-38306a582019 started at 2022-05-19 19:16:03 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.586: INFO: Container client-container ready: false, restart count 0 May 19 19:16:26.586: INFO: netserver-1 started at 2022-05-19 19:14:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.586: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.586: INFO: boom-server started at 2022-05-19 19:15:50 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.586: INFO: Container boom-server ready: true, restart count 0 May 19 19:16:26.586: INFO: hostexec-kind-worker2-vfx8t started at 2022-05-19 19:15:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.586: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:26.586: INFO: rs-jmlhb started at 2022-05-19 19:15:20 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.586: INFO: Container donothing ready: false, restart count 0 May 19 19:16:26.586: INFO: forbid-27549796-5gbns started at 2022-05-19 19:16:00 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.586: INFO: Container c ready: true, restart count 0 May 19 19:16:26.586: INFO: netserver-1 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.586: INFO: Container webserver ready: false, restart count 0 May 19 19:16:27.439: INFO: Latency metrics for node kind-worker2 May 19 19:16:27.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-probe-5215" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sEmptyDir\swrapper\svolumes\sshould\snot\sconflict\s\[Conformance\]$'
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 19 19:16:19.382: Unexpected error: <*errors.StatusError | 0xc003436500>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:103 from junit_05.xml
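The <*errors.StatusError> dump above is how the etcd timeout reaches the test: an HTTP 500 status whose Message is "etcdserver: request timed out" and whose Reason is empty. Below is a minimal Go sketch, for illustration only, of how a caller could classify that error and retry the request with client-go's retry helper; the isTransientEtcdTimeout helper, the reuse of the kubeconfig path logged by this run, and the Pods("default").List call are assumptions made for the example, not something the e2e framework does (the framework fails the test instead of retrying).

package main

import (
	"context"
	"fmt"
	"net/http"
	"strings"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

// isTransientEtcdTimeout reports whether err looks like the 500
// "etcdserver: request timed out" StatusError shown in the failure above.
// (Hypothetical helper, named here only for this sketch.)
func isTransientEtcdTimeout(err error) bool {
	status, ok := err.(apierrors.APIStatus)
	if !ok {
		return false
	}
	s := status.Status()
	return s.Code == http.StatusInternalServerError &&
		strings.Contains(s.Message, "etcdserver: request timed out")
}

func main() {
	// Reuses the kubeconfig path logged by this test run.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/kind-test-config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Retry the request with backoff for as long as the error stays transient.
	err = retry.OnError(retry.DefaultBackoff, isTransientEtcdTimeout, func() error {
		_, listErr := client.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
		return listErr
	})
	fmt.Println("final error after retries:", err)
}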
[BeforeEach] [sig-storage] EmptyDir wrapper volumes /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 19 19:16:03.970: INFO: >>> kubeConfig: /root/.kube/kind-test-config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 19 19:16:04.071: INFO: The status of Pod pod-secrets-bab7a309-2f16-455c-99ff-6d1894bc83b0 is Pending, waiting for it to be Running (with Ready = true) May 19 19:16:19.382: FAIL: Unexpected error: <*errors.StatusError | 0xc003436500>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*PodClient).CreateSync(0xc003fbda70, 0xc00157c800, 0xc) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:103 +0xfe k8s.io/kubernetes/test/e2e/storage.glob..func4.1() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/empty_dir_wrapper.go:147 +0x985 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0009b0780) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0009b0780) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc0009b0780, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 [AfterEach] [sig-storage] EmptyDir wrapper volumes /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "emptydir-wrapper-4987". STEP: Found 2 events. 
May 19 19:16:24.055: INFO: At 2022-05-19 19:16:04 +0000 UTC - event for pod-secrets-bab7a309-2f16-455c-99ff-6d1894bc83b0: {default-scheduler } Scheduled: Successfully assigned emptydir-wrapper-4987/pod-secrets-bab7a309-2f16-455c-99ff-6d1894bc83b0 to kind-worker2 May 19 19:16:24.055: INFO: At 2022-05-19 19:16:19 +0000 UTC - event for pod-secrets-bab7a309-2f16-455c-99ff-6d1894bc83b0: {kubelet kind-worker2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-hbs88" : failed to fetch token: etcdserver: request timed out May 19 19:16:24.216: INFO: POD NODE PHASE GRACE CONDITIONS May 19 19:16:24.216: INFO: pod-secrets-bab7a309-2f16-455c-99ff-6d1894bc83b0 kind-worker2 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:16:04 +0000 UTC }] May 19 19:16:24.216: INFO: May 19 19:16:24.319: INFO: Logging node info for node kind-control-plane May 19 19:16:24.415: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7e5be5e1-f40a-4fa4-9c1e-ae1356a8d5d2 42765 0 2022-05-19 18:59:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-19 18:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-19 18:59:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-19 18:59:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 
DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6da7820732414c09b805ad6b354ea130,SystemUUID:cd27b40e-6f1b-48bc-8ba8-36264df7de17,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:16:24.416: INFO: Logging kubelet events for node kind-control-plane May 19 19:16:24.520: INFO: Logging pods the kubelet thinks is on node kind-control-plane May 19 19:16:24.633: INFO: coredns-78fcd69978-szhjl started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.633: INFO: Container coredns ready: true, restart count 0 May 19 19:16:24.633: INFO: local-path-provisioner-6c9449b9dd-rq246 started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 
19:16:24.633: INFO: Container local-path-provisioner ready: true, restart count 0 May 19 19:16:24.633: INFO: etcd-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.633: INFO: Container etcd ready: true, restart count 0 May 19 19:16:24.633: INFO: kube-apiserver-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.633: INFO: Container kube-apiserver ready: true, restart count 0 May 19 19:16:24.633: INFO: kube-controller-manager-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.633: INFO: Container kube-controller-manager ready: true, restart count 0 May 19 19:16:24.633: INFO: kindnet-sp68s started at 2022-05-19 18:59:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.634: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:24.634: INFO: kube-scheduler-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.634: INFO: Container kube-scheduler ready: true, restart count 0 May 19 19:16:24.634: INFO: coredns-78fcd69978-79cfm started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.634: INFO: Container coredns ready: true, restart count 0 May 19 19:16:24.634: INFO: kube-proxy-c8wmp started at 2022-05-19 18:59:52 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.634: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:25.053: INFO: Latency metrics for node kind-control-plane May 19 19:16:25.053: INFO: Logging node info for node kind-worker May 19 19:16:25.085: INFO: Node Info: &Node{ObjectMeta:{kind-worker 5aace22e-9461-4dd4-8842-d4c95088e6c2 46103 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-7246":"csi-mock-csi-mock-volumes-7246"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-19 19:15:56 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:23febdd7255b49db9d80d997950dd2f2,SystemUUID:09159bf9-dc54-4c7a-91f7-a2bdb5d0f9d7,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 
k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b 
k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:16:25.085: INFO: Logging kubelet events for node kind-worker May 19 19:16:25.162: INFO: Logging pods the kubelet thinks is on node kind-worker May 19 19:16:25.229: INFO: test-new-deployment-847dcfb7fb-c4njf started at 2022-05-19 19:15:58 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.229: INFO: Container httpd ready: false, restart count 0 May 19 19:16:25.229: INFO: startup-script started at 2022-05-19 19:16:00 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.229: INFO: Container startup-script ready: true, restart count 0 May 19 19:16:25.229: INFO: ss2-2 started at <nil> (0+0 container statuses recorded) May 19 19:16:25.229: INFO: pod-secrets-7eedc752-ee2a-4887-bfa0-6b08ae1b03ad started at 2022-05-19 19:15:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.229: INFO: Container secret-volume-test ready: false, restart count 0 May 19 19:16:25.229: INFO: csi-mockplugin-resizer-0 started at 2022-05-19 19:14:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.229: INFO: Container csi-resizer ready: true, restart count 0 May 19 19:16:25.229: INFO: host-test-container-pod started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.229: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:25.229: INFO: 
kindnet-4gdb4 started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.229: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:25.229: INFO: csi-mockplugin-attacher-0 started at 2022-05-19 19:14:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.229: INFO: Container csi-attacher ready: true, restart count 0 May 19 19:16:25.229: INFO: test-container-pod started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.229: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.229: INFO: up-down-2-gkj59 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.229: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:25.229: INFO: hostexec-kind-worker-f96wj started at 2022-05-19 19:15:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.229: INFO: Container agnhost-container ready: false, restart count 0 May 19 19:16:25.229: INFO: kube-proxy-cv6pt started at 2022-05-19 18:59:55 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.229: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:25.229: INFO: pod-secrets-a23f9038-0c7a-436f-a040-8f745ef7d572 started at 2022-05-19 19:15:35 +0000 UTC (0+3 container statuses recorded) May 19 19:16:25.229: INFO: Container creates-volume-test ready: true, restart count 0 May 19 19:16:25.229: INFO: Container dels-volume-test ready: true, restart count 0 May 19 19:16:25.229: INFO: Container upds-volume-test ready: true, restart count 0 May 19 19:16:25.229: INFO: test-container-pod started at 2022-05-19 19:15:43 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.229: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.229: INFO: oidc-discovery-validator started at 2022-05-19 19:15:50 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.229: INFO: Container oidc-discovery-validator ready: false, restart count 0 May 19 19:16:25.229: INFO: hostexec-kind-worker-dcvzn started at <nil> (0+0 container statuses recorded) May 19 19:16:25.229: INFO: netserver-0 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.229: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.229: INFO: configmap-client started at <nil> (0+0 container statuses recorded) May 19 19:16:25.229: INFO: netserver-0 started at 2022-05-19 19:14:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.229: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.229: INFO: csi-mockplugin-0 started at 2022-05-19 19:14:14 +0000 UTC (0+3 container statuses recorded) May 19 19:16:25.229: INFO: Container csi-provisioner ready: true, restart count 0 May 19 19:16:25.229: INFO: Container driver-registrar ready: true, restart count 0 May 19 19:16:25.229: INFO: Container mock ready: true, restart count 0 May 19 19:16:25.229: INFO: pod-subpath-test-projected-p9hs started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.229: INFO: Container test-container-subpath-projected-p9hs ready: true, restart count 0 May 19 19:16:25.229: INFO: ss2-0 started at 2022-05-19 19:15:40 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.229: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.229: INFO: hostexec-kind-worker-9n75g started at 2022-05-19 19:16:01 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.229: INFO: Container agnhost-container ready: true, 
restart count 0 May 19 19:16:25.229: INFO: up-down-1-wbz22 started at 2022-05-19 19:14:24 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.229: INFO: Container up-down-1 ready: true, restart count 0 May 19 19:16:25.229: INFO: up-down-2-prvvv started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.229: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:25.229: INFO: netserver-0 started at 2022-05-19 19:14:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.229: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.275: INFO: Latency metrics for node kind-worker May 19 19:16:26.275: INFO: Logging node info for node kind-worker2 May 19 19:16:26.320: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 7a16523d-7da3-4c78-89f3-8eb0caae50f1 46424 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-1152":"kind-worker2"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2022-05-19 19:15:42 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-05-19 19:16:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 
DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9e448859533f489fa0673b5d908c4c8a,SystemUUID:73bc70df-3537-48c2-a00e-91739ab5b72a,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a 
k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b 
k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1152^11ea9557-d7a8-11ec-b978-a6428bea170d,DevicePath:,},},Config:nil,},} May 19 19:16:26.321: INFO: Logging kubelet events for node kind-worker2 May 19 19:16:26.367: INFO: Logging pods the kubelet thinks is on node kind-worker2 May 19 19:16:26.454: INFO: forbid-27549796-5gbns started at 2022-05-19 19:16:00 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.454: INFO: Container c ready: true, restart count 0 May 19 19:16:26.454: INFO: netserver-1 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.454: INFO: Container webserver ready: false, restart count 0 May 19 19:16:26.454: INFO: kindnet-jk9nv started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.454: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:26.454: INFO: busybox-91f9a5ba-0aeb-4445-a18c-ab8e7ec56a1a started at 2022-05-19 19:15:23 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.454: INFO: Container busybox ready: true, restart count 0 May 19 19:16:26.454: INFO: pod-22513bfc-35e4-450f-9936-d72b17f75f4c started at 2022-05-19 19:15:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.454: INFO: Container write-pod ready: true, restart count 0 May 19 19:16:26.454: INFO: kube-proxy-wgjrm started at 2022-05-19 18:59:58 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.454: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:26.454: INFO: hostexec-kind-worker2-86jcw started at 2022-05-19 19:15:26 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.454: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:26.454: INFO: ss2-1 started at 2022-05-19 19:15:51 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.454: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.454: INFO: netserver-1 started at 2022-05-19 19:14:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.454: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.454: INFO: pod-secrets-bab7a309-2f16-455c-99ff-6d1894bc83b0 started at <nil> (0+0 container statuses recorded) May 19 19:16:26.454: INFO: up-down-2-5zs72 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.454: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:26.454: INFO: ss-0 started at 2022-05-19 19:15:29 +0000 UTC (0+1 container statuses recorded) May 19 
19:16:26.454: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.454: INFO: liveness-188ed752-b6d5-4d2d-8753-f3495434988f started at 2022-05-19 19:14:22 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.454: INFO: Container agnhost-container ready: false, restart count 4 May 19 19:16:26.454: INFO: downwardapi-volume-cdb36836-a0b8-4038-ba01-38306a582019 started at 2022-05-19 19:16:03 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.454: INFO: Container client-container ready: false, restart count 0 May 19 19:16:26.454: INFO: netserver-1 started at 2022-05-19 19:14:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.454: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.454: INFO: csi-hostpathplugin-0 started at 2022-05-19 19:15:33 +0000 UTC (0+7 container statuses recorded) May 19 19:16:26.454: INFO: Container csi-attacher ready: true, restart count 0 May 19 19:16:26.454: INFO: Container csi-provisioner ready: true, restart count 0 May 19 19:16:26.455: INFO: Container csi-resizer ready: true, restart count 0 May 19 19:16:26.455: INFO: Container csi-snapshotter ready: true, restart count 0 May 19 19:16:26.455: INFO: Container hostpath ready: true, restart count 0 May 19 19:16:26.455: INFO: Container liveness-probe ready: true, restart count 0 May 19 19:16:26.455: INFO: Container node-driver-registrar ready: true, restart count 0 May 19 19:16:26.455: INFO: boom-server started at 2022-05-19 19:15:50 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.455: INFO: Container boom-server ready: true, restart count 0 May 19 19:16:26.455: INFO: hostexec-kind-worker2-vfx8t started at 2022-05-19 19:15:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.455: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:26.455: INFO: rs-jmlhb started at 2022-05-19 19:15:20 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.455: INFO: Container donothing ready: false, restart count 0 May 19 19:16:27.279: INFO: Latency metrics for node kind-worker2 May 19 19:16:27.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-wrapper-4987" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sdir\-bindmounted\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\svolumes\sshould\sstore\sdata$'
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159 May 19 19:16:19.409: Unexpected error: <*errors.StatusError | 0xc003146280>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:110from junit_02.xml
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":28,"skipped":285,"failed":0} [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] volumes /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client May 19 19:15:57.860: INFO: >>> kubeConfig: /root/.kube/kind-test-config �[1mSTEP�[0m: Building a namespace api object, basename volume �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should store data /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:159 May 19 19:15:57.970: INFO: In-tree plugin kubernetes.io/local-volume is not migrated, not validating any metrics May 19 19:16:19.408: FAIL: Unexpected error: <*errors.StatusError | 0xc003146280>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).launchNodeExecPod(0xc004696590, 0xc003d02b95, 0xb, 0xb) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:110 +0x4b9 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).exec(0xc004696590, 0xc002256540, 0xba, 0xc003d50600, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:136 +0x3b5 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).IssueCommandWithResult(0xc004696590, 0xc002256540, 0xba, 0xc003d50600, 0x3, 0xba, 0xc002256540, 0xc0039bc000) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:169 +0x99 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).IssueCommand(0xc004696590, 0xc002256540, 0xba, 0xc003d50600, 0x3, 0xc002256540) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:178 +0x49 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).setupLocalVolumeDirectoryBindMounted(0xc003d5e120, 0xc003d50600, 0x0, 0x203000) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:237 +0x14b k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).Create(0xc003d5e120, 0xc003d50600, 0x7094dfe, 0xf, 0x0, 0x68519e0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:300 +0x47b k8s.io/kubernetes/test/e2e/storage/drivers.(*localDriver).CreateVolume(0xc0010a5800, 0xc003d1b440, 0x7098411, 0x10, 0xc0010a5800, 0x6d57d01) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1941 +0x144 k8s.io/kubernetes/test/e2e/storage/framework.CreateVolume(0x7911668, 0xc0010a5800, 0xc003d1b440, 0x7098411, 0x10, 0xc00284f180, 0x2199bc5) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/driver_operations.go:43 +0x222 k8s.io/kubernetes/test/e2e/storage/framework.CreateVolumeResource(0x7911668, 0xc0010a5800, 0xc003d1b440, 0x70f7855, 0x1f, 0x0, 0x0, 0x7098411, 0x10, 0x0, ...) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/volume_resource.go:65 +0x1e5 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func1() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:140 +0x2a5 k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func3() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:160 +0xae k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0003fac00) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0003fac00) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc0003fac00, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] volumes /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 �[1mSTEP�[0m: Collecting events from namespace "volume-1374". �[1mSTEP�[0m: Found 4 events. 
May 19 19:16:24.022: INFO: At 2022-05-19 19:15:57 +0000 UTC - event for hostexec-kind-worker-f96wj: {default-scheduler } Scheduled: Successfully assigned volume-1374/hostexec-kind-worker-f96wj to kind-worker May 19 19:16:24.022: INFO: At 2022-05-19 19:15:59 +0000 UTC - event for hostexec-kind-worker-f96wj: {kubelet kind-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine May 19 19:16:24.022: INFO: At 2022-05-19 19:15:59 +0000 UTC - event for hostexec-kind-worker-f96wj: {kubelet kind-worker} Created: Created container agnhost-container May 19 19:16:24.022: INFO: At 2022-05-19 19:15:59 +0000 UTC - event for hostexec-kind-worker-f96wj: {kubelet kind-worker} Started: Started container agnhost-container May 19 19:16:24.034: INFO: POD NODE PHASE GRACE CONDITIONS May 19 19:16:24.034: INFO: hostexec-kind-worker-f96wj kind-worker Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:57 +0000 UTC ContainersNotReady containers with unready status: [agnhost-container]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:57 +0000 UTC ContainersNotReady containers with unready status: [agnhost-container]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:57 +0000 UTC }] May 19 19:16:24.034: INFO: May 19 19:16:24.215: INFO: Logging node info for node kind-control-plane May 19 19:16:24.309: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7e5be5e1-f40a-4fa4-9c1e-ae1356a8d5d2 42765 0 2022-05-19 18:59:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-19 18:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-19 18:59:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-19 18:59:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} 
status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6da7820732414c09b805ad6b354ea130,SystemUUID:cd27b40e-6f1b-48bc-8ba8-36264df7de17,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 
k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:16:24.310: INFO: Logging kubelet events for node kind-control-plane May 19 19:16:24.418: INFO: Logging pods the kubelet thinks is on node kind-control-plane May 19 19:16:24.526: INFO: kube-scheduler-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.526: INFO: Container kube-scheduler ready: true, restart count 0 May 19 19:16:24.526: INFO: coredns-78fcd69978-79cfm started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.526: INFO: Container coredns ready: true, restart count 0 May 19 19:16:24.526: INFO: kube-proxy-c8wmp started at 2022-05-19 18:59:52 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.526: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:24.526: INFO: etcd-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.526: INFO: Container etcd ready: true, restart count 0 May 19 19:16:24.526: INFO: kube-apiserver-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.526: INFO: Container kube-apiserver ready: true, restart count 0 May 19 19:16:24.526: INFO: kube-controller-manager-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.526: INFO: Container kube-controller-manager ready: true, restart count 0 May 19 19:16:24.526: INFO: kindnet-sp68s started at 2022-05-19 18:59:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.526: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:24.526: INFO: coredns-78fcd69978-szhjl started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.526: INFO: Container coredns ready: true, restart count 0 May 19 19:16:24.526: INFO: local-path-provisioner-6c9449b9dd-rq246 started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.526: INFO: Container local-path-provisioner ready: true, restart count 0 May 19 19:16:24.694: INFO: Latency metrics for node kind-control-plane May 19 19:16:24.694: INFO: Logging node info for node kind-worker May 19 19:16:24.761: INFO: Node Info: &Node{ObjectMeta:{kind-worker 5aace22e-9461-4dd4-8842-d4c95088e6c2 46103 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-7246":"csi-mock-csi-mock-volumes-7246"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-19 19:15:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:23febdd7255b49db9d80d997950dd2f2,SystemUUID:09159bf9-dc54-4c7a-91f7-a2bdb5d0f9d7,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 
21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 
k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:16:24.762: INFO: Logging kubelet events for node kind-worker May 19 19:16:24.820: INFO: Logging pods the kubelet thinks is on node kind-worker May 19 19:16:24.935: INFO: up-down-2-gkj59 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.935: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:24.935: INFO: hostexec-kind-worker-f96wj started at 2022-05-19 19:15:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.935: INFO: Container agnhost-container ready: false, restart count 0 May 19 19:16:24.935: INFO: kube-proxy-cv6pt started at 2022-05-19 18:59:55 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.935: INFO: Container kube-proxy ready: true, 
restart count 0 May 19 19:16:24.935: INFO: pod-secrets-a23f9038-0c7a-436f-a040-8f745ef7d572 started at 2022-05-19 19:15:35 +0000 UTC (0+3 container statuses recorded) May 19 19:16:24.935: INFO: Container creates-volume-test ready: true, restart count 0 May 19 19:16:24.935: INFO: Container dels-volume-test ready: true, restart count 0 May 19 19:16:24.935: INFO: Container upds-volume-test ready: true, restart count 0 May 19 19:16:24.935: INFO: test-container-pod started at 2022-05-19 19:15:43 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.935: INFO: Container webserver ready: true, restart count 0 May 19 19:16:24.935: INFO: netserver-0 started at 2022-05-19 19:14:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.935: INFO: Container webserver ready: true, restart count 0 May 19 19:16:24.935: INFO: oidc-discovery-validator started at 2022-05-19 19:15:50 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.935: INFO: Container oidc-discovery-validator ready: false, restart count 0 May 19 19:16:24.935: INFO: hostexec-kind-worker-dcvzn started at <nil> (0+0 container statuses recorded) May 19 19:16:24.935: INFO: netserver-0 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.935: INFO: Container webserver ready: true, restart count 0 May 19 19:16:24.935: INFO: configmap-client started at <nil> (0+0 container statuses recorded) May 19 19:16:24.935: INFO: csi-mockplugin-0 started at 2022-05-19 19:14:14 +0000 UTC (0+3 container statuses recorded) May 19 19:16:24.935: INFO: Container csi-provisioner ready: true, restart count 0 May 19 19:16:24.935: INFO: Container driver-registrar ready: true, restart count 0 May 19 19:16:24.935: INFO: Container mock ready: true, restart count 0 May 19 19:16:24.935: INFO: netserver-0 started at 2022-05-19 19:14:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.935: INFO: Container webserver ready: true, restart count 0 May 19 19:16:24.935: INFO: pod-subpath-test-projected-p9hs started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.935: INFO: Container test-container-subpath-projected-p9hs ready: true, restart count 0 May 19 19:16:24.935: INFO: ss2-0 started at 2022-05-19 19:15:40 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.935: INFO: Container webserver ready: true, restart count 0 May 19 19:16:24.935: INFO: hostexec-kind-worker-9n75g started at <nil> (0+0 container statuses recorded) May 19 19:16:24.935: INFO: up-down-1-wbz22 started at 2022-05-19 19:14:24 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.935: INFO: Container up-down-1 ready: true, restart count 0 May 19 19:16:24.935: INFO: up-down-2-prvvv started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.935: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:24.935: INFO: ss2-2 started at <nil> (0+0 container statuses recorded) May 19 19:16:24.935: INFO: test-new-deployment-847dcfb7fb-c4njf started at 2022-05-19 19:15:58 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.935: INFO: Container httpd ready: false, restart count 0 May 19 19:16:24.935: INFO: startup-script started at <nil> (0+0 container statuses recorded) May 19 19:16:24.935: INFO: pod-secrets-7eedc752-ee2a-4887-bfa0-6b08ae1b03ad started at 2022-05-19 19:15:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.935: INFO: Container secret-volume-test ready: false, restart count 0 May 19 19:16:24.935: INFO: csi-mockplugin-resizer-0 
started at 2022-05-19 19:14:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.935: INFO: Container csi-resizer ready: true, restart count 0 May 19 19:16:24.935: INFO: host-test-container-pod started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.935: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:24.935: INFO: kindnet-4gdb4 started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.935: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:24.935: INFO: csi-mockplugin-attacher-0 started at 2022-05-19 19:14:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.935: INFO: Container csi-attacher ready: true, restart count 0 May 19 19:16:24.935: INFO: test-container-pod started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.935: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.346: INFO: Latency metrics for node kind-worker May 19 19:16:25.347: INFO: Logging node info for node kind-worker2 May 19 19:16:25.367: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 7a16523d-7da3-4c78-89f3-8eb0caae50f1 46424 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-1152":"kind-worker2"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2022-05-19 19:15:42 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-05-19 19:16:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} 
BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9e448859533f489fa0673b5d908c4c8a,SystemUUID:73bc70df-3537-48c2-a00e-91739ab5b72a,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 
k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d 
k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1152^11ea9557-d7a8-11ec-b978-a6428bea170d,DevicePath:,},},Config:nil,},} May 19 19:16:25.367: INFO: Logging kubelet events for node kind-worker2 May 19 19:16:25.379: INFO: Logging pods the kubelet thinks is on node kind-worker2 May 19 19:16:25.398: INFO: downwardapi-volume-cdb36836-a0b8-4038-ba01-38306a582019 started at 2022-05-19 19:16:03 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.398: INFO: Container client-container ready: false, restart count 0 May 19 19:16:25.398: INFO: netserver-1 started at 2022-05-19 19:14:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.398: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.398: INFO: csi-hostpathplugin-0 started at 2022-05-19 19:15:33 +0000 UTC (0+7 container statuses recorded) May 19 19:16:25.398: INFO: Container csi-attacher ready: true, restart count 0 May 19 19:16:25.398: INFO: Container csi-provisioner ready: true, restart count 0 May 19 19:16:25.398: INFO: Container csi-resizer ready: true, restart count 0 May 19 19:16:25.398: INFO: Container csi-snapshotter ready: true, restart count 0 May 19 19:16:25.398: INFO: Container hostpath ready: true, restart count 0 May 19 19:16:25.398: INFO: Container liveness-probe ready: true, restart count 0 May 19 19:16:25.398: INFO: Container node-driver-registrar ready: true, restart count 0 May 19 19:16:25.398: INFO: liveness-188ed752-b6d5-4d2d-8753-f3495434988f started at 2022-05-19 19:14:22 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.398: INFO: Container agnhost-container ready: false, restart count 4 May 19 19:16:25.398: INFO: hostexec-kind-worker2-vfx8t started at 2022-05-19 19:15:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.398: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:25.398: INFO: rs-jmlhb started at 2022-05-19 19:15:20 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.398: INFO: Container donothing ready: false, restart count 0 May 19 19:16:25.398: INFO: boom-server started at 2022-05-19 19:15:50 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.398: INFO: Container boom-server ready: true, restart count 0 May 19 19:16:25.398: INFO: forbid-27549796-5gbns started at 2022-05-19 19:16:00 +0000 UTC (0+1 container 
statuses recorded) May 19 19:16:25.398: INFO: Container c ready: true, restart count 0 May 19 19:16:25.398: INFO: netserver-1 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.398: INFO: Container webserver ready: false, restart count 0 May 19 19:16:25.398: INFO: up-down-1-5bpp7 started at 2022-05-19 19:14:24 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.398: INFO: Container up-down-1 ready: true, restart count 0 May 19 19:16:25.398: INFO: busybox-91f9a5ba-0aeb-4445-a18c-ab8e7ec56a1a started at 2022-05-19 19:15:23 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.398: INFO: Container busybox ready: true, restart count 0 May 19 19:16:25.398: INFO: kindnet-jk9nv started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.398: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:25.398: INFO: pod-22513bfc-35e4-450f-9936-d72b17f75f4c started at 2022-05-19 19:15:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.398: INFO: Container write-pod ready: true, restart count 0 May 19 19:16:25.398: INFO: hostexec-kind-worker2-86jcw started at 2022-05-19 19:15:26 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.398: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:25.398: INFO: ss2-1 started at 2022-05-19 19:15:51 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.398: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.398: INFO: netserver-1 started at 2022-05-19 19:14:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.398: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.398: INFO: kube-proxy-wgjrm started at 2022-05-19 18:59:58 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.398: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:25.398: INFO: up-down-2-5zs72 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.398: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:25.398: INFO: ss-0 started at 2022-05-19 19:15:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.398: INFO: Container webserver ready: false, restart count 0 May 19 19:16:25.398: INFO: pod-secrets-bab7a309-2f16-455c-99ff-6d1894bc83b0 started at <nil> (0+0 container statuses recorded) May 19 19:16:25.972: INFO: Latency metrics for node kind-worker2 May 19 19:16:25.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "volume-1374" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sIn\-tree\sVolumes\s\[Driver\:\slocal\]\[LocalVolumeType\:\sdir\-link\]\s\[Testpattern\:\sPre\-provisioned\sPV\s\(default\sfs\)\]\ssubPath\sshould\ssupport\sreadOnly\sfile\sspecified\sin\sthe\svolumeMount\s\[LinuxOnly\]$'
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380 May 19 19:16:19.384: Unexpected error: <*errors.StatusError | 0xc0007e9cc0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:110from junit_07.xml
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client May 19 19:16:00.943: INFO: >>> kubeConfig: /root/.kube/kind-test-config �[1mSTEP�[0m: Building a namespace api object, basename provisioning �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should support readOnly file specified in the volumeMount [LinuxOnly] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:380 May 19 19:16:01.109: INFO: In-tree plugin kubernetes.io/local-volume is not migrated, not validating any metrics May 19 19:16:19.384: FAIL: Unexpected error: <*errors.StatusError | 0xc0007e9cc0>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).launchNodeExecPod(0xc002d41550, 0xc004579615, 0xb, 0xb) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:110 +0x4b9 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).exec(0xc002d41550, 0xc00276c340, 0xc3, 0xc003e71800, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:136 +0x3b5 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).IssueCommandWithResult(0xc002d41550, 0xc00276c340, 0xc3, 0xc003e71800, 0x3, 0xc3, 0xc00276c340, 0xc002e080d0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:169 +0x99 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).IssueCommand(0xc002d41550, 0xc00276c340, 0xc3, 0xc003e71800, 0x3, 0xc00276c340) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:178 +0x49 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).setupLocalVolumeDirectoryLink(0xc0031dcff0, 0xc003e71800, 0x0, 0x203001) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:217 +0x194 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).Create(0xc0031dcff0, 0xc003e71800, 0x707aa4a, 0x8, 0x0, 0x68519e0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:298 +0x3f4 k8s.io/kubernetes/test/e2e/storage/drivers.(*localDriver).CreateVolume(0xc002582900, 0xc001a17860, 0x7098411, 0x10, 0xc002582900, 0x6d57d01) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/drivers/in_tree.go:1941 +0x144 k8s.io/kubernetes/test/e2e/storage/framework.CreateVolume(0x7911668, 0xc002582900, 0xc001a17860, 0x7098411, 0x10, 0xc004a1a000, 0x2199bc5) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/driver_operations.go:43 +0x222 k8s.io/kubernetes/test/e2e/storage/framework.CreateVolumeResource(0x7911668, 0xc002582900, 0xc001a17860, 0x70f7855, 0x1f, 0x0, 0x0, 0x7098411, 0x10, 0x0, ...) 
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/framework/volume_resource.go:65 +0x1e5 k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func1() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:127 +0x2c5 k8s.io/kubernetes/test/e2e/storage/testsuites.(*subPathTestSuite).DefineTests.func17() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:381 +0x6a k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00022f680) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc00022f680) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc00022f680, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 [AfterEach] [Testpattern: Pre-provisioned PV (default fs)] subPath /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 �[1mSTEP�[0m: Collecting events from namespace "provisioning-5306". �[1mSTEP�[0m: Found 4 events. May 19 19:16:24.047: INFO: At 2022-05-19 19:16:01 +0000 UTC - event for hostexec-kind-worker-9n75g: {default-scheduler } Scheduled: Successfully assigned provisioning-5306/hostexec-kind-worker-9n75g to kind-worker May 19 19:16:24.047: INFO: At 2022-05-19 19:16:02 +0000 UTC - event for hostexec-kind-worker-9n75g: {kubelet kind-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine May 19 19:16:24.047: INFO: At 2022-05-19 19:16:02 +0000 UTC - event for hostexec-kind-worker-9n75g: {kubelet kind-worker} Created: Created container agnhost-container May 19 19:16:24.047: INFO: At 2022-05-19 19:16:02 +0000 UTC - event for hostexec-kind-worker-9n75g: {kubelet kind-worker} Started: Started container agnhost-container May 19 19:16:24.221: INFO: POD NODE PHASE GRACE CONDITIONS May 19 19:16:24.221: INFO: hostexec-kind-worker-9n75g kind-worker Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:16:01 +0000 UTC }] May 19 19:16:24.221: INFO: May 19 19:16:24.312: INFO: Logging node info for node kind-control-plane May 19 19:16:24.411: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7e5be5e1-f40a-4fa4-9c1e-ae1356a8d5d2 42765 0 2022-05-19 18:59:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-19 18:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-19 18:59:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-19 18:59:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6da7820732414c09b805ad6b354ea130,SystemUUID:cd27b40e-6f1b-48bc-8ba8-36264df7de17,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 
k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:16:24.411: INFO: Logging kubelet events for node kind-control-plane May 19 19:16:24.519: INFO: Logging pods the kubelet thinks is on node kind-control-plane May 19 19:16:24.653: INFO: kube-scheduler-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.653: INFO: Container kube-scheduler ready: true, restart count 0 May 19 19:16:24.653: INFO: coredns-78fcd69978-79cfm started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.653: INFO: Container coredns ready: true, restart count 0 May 19 19:16:24.653: INFO: kube-proxy-c8wmp started at 2022-05-19 18:59:52 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.653: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:24.653: INFO: coredns-78fcd69978-szhjl started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.653: INFO: Container coredns ready: true, restart count 0 May 19 19:16:24.653: INFO: local-path-provisioner-6c9449b9dd-rq246 started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.653: INFO: Container local-path-provisioner ready: true, restart count 0 May 19 19:16:24.653: INFO: etcd-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.653: INFO: Container etcd ready: true, restart count 0 May 19 19:16:24.653: INFO: kube-apiserver-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.653: INFO: Container kube-apiserver ready: true, restart count 0 May 19 19:16:24.653: INFO: kube-controller-manager-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.653: INFO: Container kube-controller-manager ready: true, restart count 0 May 19 19:16:24.653: INFO: kindnet-sp68s started at 2022-05-19 18:59:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.653: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:25.144: INFO: Latency metrics for node kind-control-plane May 19 19:16:25.144: INFO: Logging node info for node kind-worker May 19 19:16:25.170: INFO: Node Info: &Node{ObjectMeta:{kind-worker 
5aace22e-9461-4dd4-8842-d4c95088e6c2 46103 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-7246":"csi-mock-csi-mock-volumes-7246"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-19 19:15:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:23febdd7255b49db9d80d997950dd2f2,SystemUUID:09159bf9-dc54-4c7a-91f7-a2bdb5d0f9d7,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 
k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:16:25.171: INFO: Logging kubelet events for node kind-worker May 19 19:16:25.216: INFO: Logging pods the kubelet thinks is on node kind-worker May 19 19:16:25.294: INFO: up-down-2-gkj59 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.294: INFO: Container up-down-2 ready: true, restart count 0 May 19 
19:16:25.294: INFO: hostexec-kind-worker-f96wj started at 2022-05-19 19:15:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.294: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:25.294: INFO: kube-proxy-cv6pt started at 2022-05-19 18:59:55 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.294: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:25.294: INFO: pod-secrets-a23f9038-0c7a-436f-a040-8f745ef7d572 started at 2022-05-19 19:15:35 +0000 UTC (0+3 container statuses recorded) May 19 19:16:25.294: INFO: Container creates-volume-test ready: true, restart count 0 May 19 19:16:25.294: INFO: Container dels-volume-test ready: true, restart count 0 May 19 19:16:25.294: INFO: Container upds-volume-test ready: true, restart count 0 May 19 19:16:25.294: INFO: test-container-pod started at 2022-05-19 19:15:43 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.294: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.294: INFO: oidc-discovery-validator started at 2022-05-19 19:15:50 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.294: INFO: Container oidc-discovery-validator ready: false, restart count 0 May 19 19:16:25.294: INFO: hostexec-kind-worker-dcvzn started at <nil> (0+0 container statuses recorded) May 19 19:16:25.295: INFO: netserver-0 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.295: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.295: INFO: configmap-client started at <nil> (0+0 container statuses recorded) May 19 19:16:25.295: INFO: netserver-0 started at 2022-05-19 19:14:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.295: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.295: INFO: csi-mockplugin-0 started at 2022-05-19 19:14:14 +0000 UTC (0+3 container statuses recorded) May 19 19:16:25.295: INFO: Container csi-provisioner ready: true, restart count 0 May 19 19:16:25.295: INFO: Container driver-registrar ready: true, restart count 0 May 19 19:16:25.295: INFO: Container mock ready: true, restart count 0 May 19 19:16:25.295: INFO: pod-subpath-test-projected-p9hs started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.295: INFO: Container test-container-subpath-projected-p9hs ready: true, restart count 0 May 19 19:16:25.295: INFO: ss2-0 started at 2022-05-19 19:15:40 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.295: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.295: INFO: hostexec-kind-worker-9n75g started at 2022-05-19 19:16:01 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.295: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:25.295: INFO: up-down-1-wbz22 started at 2022-05-19 19:14:24 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.295: INFO: Container up-down-1 ready: true, restart count 0 May 19 19:16:25.295: INFO: up-down-2-prvvv started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.295: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:25.295: INFO: netserver-0 started at 2022-05-19 19:14:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.295: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.295: INFO: test-new-deployment-847dcfb7fb-c4njf started at 2022-05-19 19:15:58 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.295: INFO: Container httpd 
ready: false, restart count 0 May 19 19:16:25.295: INFO: startup-script started at 2022-05-19 19:16:00 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.295: INFO: Container startup-script ready: true, restart count 0 May 19 19:16:25.295: INFO: ss2-2 started at <nil> (0+0 container statuses recorded) May 19 19:16:25.295: INFO: pod-secrets-7eedc752-ee2a-4887-bfa0-6b08ae1b03ad started at 2022-05-19 19:15:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.295: INFO: Container secret-volume-test ready: false, restart count 0 May 19 19:16:25.295: INFO: csi-mockplugin-resizer-0 started at 2022-05-19 19:14:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.295: INFO: Container csi-resizer ready: true, restart count 0 May 19 19:16:25.295: INFO: host-test-container-pod started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.295: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:25.295: INFO: kindnet-4gdb4 started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.295: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:25.295: INFO: csi-mockplugin-attacher-0 started at 2022-05-19 19:14:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.295: INFO: Container csi-attacher ready: true, restart count 0 May 19 19:16:25.295: INFO: test-container-pod started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.295: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.229: INFO: Latency metrics for node kind-worker May 19 19:16:26.229: INFO: Logging node info for node kind-worker2 May 19 19:16:26.286: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 7a16523d-7da3-4c78-89f3-8eb0caae50f1 46424 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-1152":"kind-worker2"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2022-05-19 19:15:42 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-05-19 19:16:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9e448859533f489fa0673b5d908c4c8a,SystemUUID:73bc70df-3537-48c2-a00e-91739ab5b72a,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 
k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce 
k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1152^11ea9557-d7a8-11ec-b978-a6428bea170d,DevicePath:,},},Config:nil,},} May 19 19:16:26.287: INFO: Logging kubelet events for node kind-worker2 May 19 19:16:26.320: INFO: Logging pods the kubelet thinks is on node kind-worker2 May 19 19:16:26.402: INFO: busybox-91f9a5ba-0aeb-4445-a18c-ab8e7ec56a1a started at 2022-05-19 19:15:23 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.403: INFO: Container busybox ready: true, restart count 0 May 19 19:16:26.403: INFO: kindnet-jk9nv started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.403: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:26.403: INFO: pod-22513bfc-35e4-450f-9936-d72b17f75f4c started at 2022-05-19 19:15:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.403: INFO: Container write-pod ready: true, restart count 0 May 19 19:16:26.403: INFO: netserver-1 started at 2022-05-19 19:14:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.403: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.403: INFO: kube-proxy-wgjrm started at 2022-05-19 18:59:58 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.403: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:26.403: INFO: hostexec-kind-worker2-86jcw started at 2022-05-19 19:15:26 +0000 UTC (0+1 container statuses 
recorded) May 19 19:16:26.403: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:26.403: INFO: ss2-1 started at 2022-05-19 19:15:51 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.403: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.403: INFO: ss-0 started at 2022-05-19 19:15:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.403: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.403: INFO: pod-secrets-bab7a309-2f16-455c-99ff-6d1894bc83b0 started at <nil> (0+0 container statuses recorded) May 19 19:16:26.403: INFO: up-down-2-5zs72 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.403: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:26.403: INFO: netserver-1 started at 2022-05-19 19:14:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.403: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.403: INFO: csi-hostpathplugin-0 started at 2022-05-19 19:15:33 +0000 UTC (0+7 container statuses recorded) May 19 19:16:26.403: INFO: Container csi-attacher ready: true, restart count 0 May 19 19:16:26.403: INFO: Container csi-provisioner ready: true, restart count 0 May 19 19:16:26.403: INFO: Container csi-resizer ready: true, restart count 0 May 19 19:16:26.403: INFO: Container csi-snapshotter ready: true, restart count 0 May 19 19:16:26.403: INFO: Container hostpath ready: true, restart count 0 May 19 19:16:26.403: INFO: Container liveness-probe ready: true, restart count 0 May 19 19:16:26.403: INFO: Container node-driver-registrar ready: true, restart count 0 May 19 19:16:26.403: INFO: liveness-188ed752-b6d5-4d2d-8753-f3495434988f started at 2022-05-19 19:14:22 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.403: INFO: Container agnhost-container ready: false, restart count 4 May 19 19:16:26.403: INFO: downwardapi-volume-cdb36836-a0b8-4038-ba01-38306a582019 started at 2022-05-19 19:16:03 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.403: INFO: Container client-container ready: false, restart count 0 May 19 19:16:26.403: INFO: hostexec-kind-worker2-vfx8t started at 2022-05-19 19:15:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.403: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:26.403: INFO: rs-jmlhb started at 2022-05-19 19:15:20 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.403: INFO: Container donothing ready: false, restart count 0 May 19 19:16:26.403: INFO: boom-server started at 2022-05-19 19:15:50 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.403: INFO: Container boom-server ready: true, restart count 0 May 19 19:16:26.403: INFO: forbid-27549796-5gbns started at 2022-05-19 19:16:00 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.403: INFO: Container c ready: true, restart count 0 May 19 19:16:26.403: INFO: netserver-1 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.403: INFO: Container webserver ready: false, restart count 0 May 19 19:16:27.174: INFO: Latency metrics for node kind-worker2 May 19 19:16:27.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-5306" for this suite.
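The failures shown in this run all bottom out in the same transient apiserver error: "etcdserver: request timed out", surfaced as a *errors.StatusError with Code 500 while the storage host-exec helper launches its pod (test/e2e/storage/utils/host_exec.go:110). The following is only an illustrative sketch, not code from this PR or from the e2e framework, and the helper names (isTransientAPIError, createPodWithRetry) are hypothetical; it shows how a caller could treat such errors as retriable using client-go's retry package.

package e2eutil

import (
	"context"
	"strings"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// isTransientAPIError reports whether an error looks like a short-lived
// apiserver/etcd hiccup (such as the 500 "etcdserver: request timed out"
// seen above) rather than a real test failure.
func isTransientAPIError(err error) bool {
	return apierrors.IsServerTimeout(err) ||
		apierrors.IsTooManyRequests(err) ||
		apierrors.IsInternalError(err) ||
		strings.Contains(err.Error(), "etcdserver: request timed out")
}

// createPodWithRetry retries pod creation on transient API errors using
// client-go's exponential backoff. Note: if the original create actually
// succeeded server-side despite the timeout, a retry may return an
// AlreadyExists error, which callers could choose to treat as success.
func createPodWithRetry(ctx context.Context, c kubernetes.Interface, ns string, pod *corev1.Pod) (*corev1.Pod, error) {
	var created *corev1.Pod
	err := retry.OnError(retry.DefaultBackoff, isTransientAPIError, func() error {
		var createErr error
		created, createErr = c.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
		return createErr
	})
	return created, err
}

A retry like this would mask a one-off etcd timeout but not a sustained etcd outage, so persistent 500s would still fail the test.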
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sPersistentVolumes\-local\s\s\[Volume\stype\:\sdir\-bindmounted\]\sSet\sfsGroup\sfor\slocal\svolume\sshould\sset\sdifferent\sfsGroup\sfor\ssecond\spod\sif\sfirst\spod\sis\sdeleted$'
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 May 19 19:16:19.398: Unexpected error: <*errors.StatusError | 0xc0039b2f00>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:110from junit_12.xml
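This case hits the same etcd timeout at the same point, launchNodeExecPod (host_exec.go:110); the events in the detailed log below show the hostexec pod was scheduled and its container started, yet the test still reports it as Pending because status reads were timing out. As a purely illustrative sketch under the same assumptions as above (hypothetical helper name, not framework code), a pod-readiness poll that tolerates transient API errors could look like this; the full BeforeEach/AfterEach log for the failing spec follows.

package e2eutil

import (
	"context"
	"strings"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodRunning polls the pod phase until it is Running, treating
// short-lived apiserver/etcd errors (like "etcdserver: request timed out")
// as retriable instead of failing the poll immediately.
func waitForPodRunning(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			if apierrors.IsServerTimeout(err) || apierrors.IsInternalError(err) ||
				strings.Contains(err.Error(), "etcdserver: request timed out") {
				return false, nil // transient: keep polling
			}
			return false, err // real error: stop immediately
		}
		return pod.Status.Phase == corev1.PodRunning, nil
	})
}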
[BeforeEach] [sig-storage] PersistentVolumes-local /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 19 19:15:59.100: INFO: >>> kubeConfig: /root/.kube/kind-test-config STEP: Building a namespace api object, basename persistent-local-volumes-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] PersistentVolumes-local /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:158 [BeforeEach] [Volume type: dir-bindmounted] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:195 STEP: Initializing test volumes May 19 19:16:19.398: FAIL: Unexpected error: <*errors.StatusError | 0xc0039b2f00>: { ErrStatus: { TypeMeta: {Kind: "", APIVersion: ""}, ListMeta: { SelfLink: "", ResourceVersion: "", Continue: "", RemainingItemCount: nil, }, Status: "Failure", Message: "etcdserver: request timed out", Reason: "", Details: nil, Code: 500, }, } etcdserver: request timed out occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).launchNodeExecPod(0xc003934a10, 0xc0005f6305, 0xb, 0xb) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:110 +0x4b9 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).exec(0xc003934a10, 0xc002249380, 0xc9, 0xc001beb800, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:136 +0x3b5 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).IssueCommandWithResult(0xc003934a10, 0xc002249380, 0xc9, 0xc001beb800, 0x3, 0xc9, 0xc002249380, 0xc0024c61a0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:169 +0x99 k8s.io/kubernetes/test/e2e/storage/utils.(*hostExecutor).IssueCommand(0xc003934a10, 0xc002249380, 0xc9, 0xc001beb800, 0x3, 0xc002249380) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/host_exec.go:178 +0x49 k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).setupLocalVolumeDirectoryBindMounted(0xc00448bda0, 0xc001beb800, 0x0, 0x7914d28) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:237 +0x14b k8s.io/kubernetes/test/e2e/storage/utils.(*ltrMgr).Create(0xc00448bda0, 0xc001beb800, 0x7094dfe, 0xf, 0x0, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/local.go:300 +0x47b k8s.io/kubernetes/test/e2e/storage.setupLocalVolumes(0xc0024a1b00, 0x7094dfe, 0xf, 0xc001beb800, 0x1, 0x0, 0x0, 0xc0024d6300) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:837 +0x157 k8s.io/kubernetes/test/e2e/storage.setupLocalVolumesPVCsPVs(0xc0024a1b00, 0x7094dfe, 0xf, 0xc001beb800, 0x1, 0x707c957, 0x9, 0x0, 0x0, 0x0) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:1102 +0x87 k8s.io/kubernetes/test/e2e/storage.glob..func21.2.1() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:200 +0xb6 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc00022cc00) 
_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc00022cc00) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc00022cc00, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 [AfterEach] [Volume type: dir-bindmounted] /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/persistent_volumes-local.go:204 STEP: Cleaning up PVC and PV [AfterEach] [sig-storage] PersistentVolumes-local /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "persistent-local-volumes-test-5786". STEP: Found 4 events. May 19 19:16:24.054: INFO: At 2022-05-19 19:15:59 +0000 UTC - event for hostexec-kind-worker-dcvzn: {default-scheduler } Scheduled: Successfully assigned persistent-local-volumes-test-5786/hostexec-kind-worker-dcvzn to kind-worker May 19 19:16:24.054: INFO: At 2022-05-19 19:16:00 +0000 UTC - event for hostexec-kind-worker-dcvzn: {kubelet kind-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.32" already present on machine May 19 19:16:24.054: INFO: At 2022-05-19 19:16:00 +0000 UTC - event for hostexec-kind-worker-dcvzn: {kubelet kind-worker} Created: Created container agnhost-container May 19 19:16:24.054: INFO: At 2022-05-19 19:16:01 +0000 UTC - event for hostexec-kind-worker-dcvzn: {kubelet kind-worker} Started: Started container agnhost-container May 19 19:16:24.214: INFO: POD NODE PHASE GRACE CONDITIONS May 19 19:16:24.214: INFO: hostexec-kind-worker-dcvzn kind-worker Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-19 19:15:59 +0000 UTC }] May 19 19:16:24.214: INFO: May 19 19:16:24.236: INFO: Logging node info for node kind-control-plane May 19 19:16:24.310: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7e5be5e1-f40a-4fa4-9c1e-ae1356a8d5d2 42765 0 2022-05-19 18:59:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-19 18:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-19 18:59:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-19 18:59:48 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6da7820732414c09b805ad6b354ea130,SystemUUID:cd27b40e-6f1b-48bc-8ba8-36264df7de17,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 
k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:16:24.311: INFO: Logging kubelet events for node kind-control-plane May 19 19:16:24.416: INFO: Logging pods the kubelet thinks is on node kind-control-plane May 19 19:16:24.526: INFO: coredns-78fcd69978-szhjl started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.526: INFO: Container coredns ready: true, restart count 0 May 19 19:16:24.526: INFO: local-path-provisioner-6c9449b9dd-rq246 started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.526: INFO: Container local-path-provisioner ready: true, restart count 0 May 19 19:16:24.526: INFO: etcd-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.526: INFO: Container etcd ready: true, restart count 0 May 19 19:16:24.526: INFO: kube-apiserver-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.526: INFO: Container kube-apiserver ready: true, restart count 0 May 19 19:16:24.526: INFO: kube-controller-manager-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.526: INFO: Container kube-controller-manager ready: true, restart count 0 May 19 19:16:24.526: INFO: kindnet-sp68s started at 2022-05-19 18:59:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.526: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:24.526: INFO: kube-scheduler-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.526: INFO: Container kube-scheduler ready: true, restart count 0 May 19 19:16:24.526: INFO: coredns-78fcd69978-79cfm started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.526: INFO: Container coredns ready: true, restart count 0 May 19 19:16:24.526: INFO: kube-proxy-c8wmp started at 2022-05-19 18:59:52 +0000 UTC (0+1 container statuses recorded) May 19 19:16:24.526: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:24.810: INFO: Latency metrics for node kind-control-plane May 19 19:16:24.810: INFO: Logging node info for node kind-worker May 19 19:16:24.959: INFO: Node Info: &Node{ObjectMeta:{kind-worker 5aace22e-9461-4dd4-8842-d4c95088e6c2 46103 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-7246":"csi-mock-csi-mock-volumes-7246"} 
kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-19 19:15:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:23febdd7255b49db9d80d997950dd2f2,SystemUUID:09159bf9-dc54-4c7a-91f7-a2bdb5d0f9d7,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 
k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:16:24.959: INFO: Logging kubelet events for node kind-worker May 19 19:16:25.028: INFO: Logging pods the kubelet thinks is on node kind-worker May 19 19:16:25.104: INFO: test-new-deployment-847dcfb7fb-c4njf started at 2022-05-19 19:15:58 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.104: INFO: Container httpd ready: false, restart 
count 0 May 19 19:16:25.104: INFO: startup-script started at <nil> (0+0 container statuses recorded) May 19 19:16:25.104: INFO: ss2-2 started at <nil> (0+0 container statuses recorded) May 19 19:16:25.104: INFO: pod-secrets-7eedc752-ee2a-4887-bfa0-6b08ae1b03ad started at 2022-05-19 19:15:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.104: INFO: Container secret-volume-test ready: false, restart count 0 May 19 19:16:25.104: INFO: csi-mockplugin-resizer-0 started at 2022-05-19 19:14:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.104: INFO: Container csi-resizer ready: true, restart count 0 May 19 19:16:25.104: INFO: host-test-container-pod started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.104: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:25.104: INFO: kindnet-4gdb4 started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.104: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:25.104: INFO: csi-mockplugin-attacher-0 started at 2022-05-19 19:14:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.104: INFO: Container csi-attacher ready: true, restart count 0 May 19 19:16:25.104: INFO: test-container-pod started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.104: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.104: INFO: up-down-2-gkj59 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.104: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:25.104: INFO: hostexec-kind-worker-f96wj started at 2022-05-19 19:15:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.104: INFO: Container agnhost-container ready: false, restart count 0 May 19 19:16:25.104: INFO: kube-proxy-cv6pt started at 2022-05-19 18:59:55 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.104: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:25.104: INFO: pod-secrets-a23f9038-0c7a-436f-a040-8f745ef7d572 started at 2022-05-19 19:15:35 +0000 UTC (0+3 container statuses recorded) May 19 19:16:25.104: INFO: Container creates-volume-test ready: true, restart count 0 May 19 19:16:25.104: INFO: Container dels-volume-test ready: true, restart count 0 May 19 19:16:25.104: INFO: Container upds-volume-test ready: true, restart count 0 May 19 19:16:25.104: INFO: test-container-pod started at 2022-05-19 19:15:43 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.104: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.104: INFO: oidc-discovery-validator started at 2022-05-19 19:15:50 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.104: INFO: Container oidc-discovery-validator ready: false, restart count 0 May 19 19:16:25.104: INFO: hostexec-kind-worker-dcvzn started at <nil> (0+0 container statuses recorded) May 19 19:16:25.104: INFO: netserver-0 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.104: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.104: INFO: configmap-client started at <nil> (0+0 container statuses recorded) May 19 19:16:25.104: INFO: netserver-0 started at 2022-05-19 19:14:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.104: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.104: INFO: csi-mockplugin-0 started at 2022-05-19 19:14:14 +0000 UTC (0+3 container statuses 
recorded) May 19 19:16:25.104: INFO: Container csi-provisioner ready: true, restart count 0 May 19 19:16:25.104: INFO: Container driver-registrar ready: true, restart count 0 May 19 19:16:25.104: INFO: Container mock ready: true, restart count 0 May 19 19:16:25.104: INFO: pod-subpath-test-projected-p9hs started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.104: INFO: Container test-container-subpath-projected-p9hs ready: true, restart count 0 May 19 19:16:25.104: INFO: ss2-0 started at 2022-05-19 19:15:40 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.104: INFO: Container webserver ready: true, restart count 0 May 19 19:16:25.104: INFO: hostexec-kind-worker-9n75g started at 2022-05-19 19:16:01 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.104: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:25.104: INFO: up-down-1-wbz22 started at 2022-05-19 19:14:24 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.104: INFO: Container up-down-1 ready: true, restart count 0 May 19 19:16:25.104: INFO: up-down-2-prvvv started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.104: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:25.104: INFO: netserver-0 started at 2022-05-19 19:14:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:25.104: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.074: INFO: Latency metrics for node kind-worker May 19 19:16:26.074: INFO: Logging node info for node kind-worker2 May 19 19:16:26.126: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 7a16523d-7da3-4c78-89f3-8eb0caae50f1 46424 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-1152":"kind-worker2"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2022-05-19 19:15:42 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-05-19 19:16:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9e448859533f489fa0673b5d908c4c8a,SystemUUID:73bc70df-3537-48c2-a00e-91739ab5b72a,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 
k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce 
k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1152^11ea9557-d7a8-11ec-b978-a6428bea170d,DevicePath:,},},Config:nil,},} May 19 19:16:26.126: INFO: Logging kubelet events for node kind-worker2 May 19 19:16:26.175: INFO: Logging pods the kubelet thinks is on node kind-worker2 May 19 19:16:26.334: INFO: kindnet-jk9nv started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.335: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:26.335: INFO: busybox-91f9a5ba-0aeb-4445-a18c-ab8e7ec56a1a started at 2022-05-19 19:15:23 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.335: INFO: Container busybox ready: true, restart count 0 May 19 19:16:26.335: INFO: pod-22513bfc-35e4-450f-9936-d72b17f75f4c started at 2022-05-19 19:15:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.335: INFO: Container write-pod ready: true, restart count 0 May 19 19:16:26.335: INFO: kube-proxy-wgjrm started at 2022-05-19 18:59:58 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.335: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:26.335: INFO: hostexec-kind-worker2-86jcw started at 2022-05-19 19:15:26 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.335: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:26.335: INFO: ss2-1 started at 2022-05-19 19:15:51 +0000 UTC (0+1 container statuses 
recorded) May 19 19:16:26.335: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.335: INFO: netserver-1 started at 2022-05-19 19:14:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.335: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.335: INFO: pod-secrets-bab7a309-2f16-455c-99ff-6d1894bc83b0 started at <nil> (0+0 container statuses recorded) May 19 19:16:26.335: INFO: up-down-2-5zs72 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.335: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:26.335: INFO: ss-0 started at 2022-05-19 19:15:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.335: INFO: Container webserver ready: false, restart count 0 May 19 19:16:26.335: INFO: liveness-188ed752-b6d5-4d2d-8753-f3495434988f started at 2022-05-19 19:14:22 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.335: INFO: Container agnhost-container ready: false, restart count 4 May 19 19:16:26.335: INFO: downwardapi-volume-cdb36836-a0b8-4038-ba01-38306a582019 started at 2022-05-19 19:16:03 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.335: INFO: Container client-container ready: false, restart count 0 May 19 19:16:26.335: INFO: netserver-1 started at 2022-05-19 19:14:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.335: INFO: Container webserver ready: true, restart count 0 May 19 19:16:26.335: INFO: csi-hostpathplugin-0 started at 2022-05-19 19:15:33 +0000 UTC (0+7 container statuses recorded) May 19 19:16:26.335: INFO: Container csi-attacher ready: true, restart count 0 May 19 19:16:26.335: INFO: Container csi-provisioner ready: true, restart count 0 May 19 19:16:26.335: INFO: Container csi-resizer ready: true, restart count 0 May 19 19:16:26.335: INFO: Container csi-snapshotter ready: true, restart count 0 May 19 19:16:26.335: INFO: Container hostpath ready: true, restart count 0 May 19 19:16:26.335: INFO: Container liveness-probe ready: true, restart count 0 May 19 19:16:26.335: INFO: Container node-driver-registrar ready: true, restart count 0 May 19 19:16:26.335: INFO: boom-server started at 2022-05-19 19:15:50 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.335: INFO: Container boom-server ready: true, restart count 0 May 19 19:16:26.335: INFO: hostexec-kind-worker2-vfx8t started at 2022-05-19 19:15:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.335: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:26.335: INFO: rs-jmlhb started at 2022-05-19 19:15:20 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.335: INFO: Container donothing ready: false, restart count 0 May 19 19:16:26.335: INFO: forbid-27549796-5gbns started at 2022-05-19 19:16:00 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.335: INFO: Container c ready: true, restart count 0 May 19 19:16:26.335: INFO: netserver-1 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container statuses recorded) May 19 19:16:26.335: INFO: Container webserver ready: false, restart count 0 May 19 19:16:26.967: INFO: Latency metrics for node kind-worker2 May 19 19:16:26.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "persistent-local-volumes-test-5786" for this suite.
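Note on reading the Node dumps above: the readiness signal the suite waits on ("Waiting up to 3m0s for all (but 0) nodes to be ready") is the NodeCondition with Type Ready and Status True, as reported for kind-worker and kind-worker2. Below is a minimal, illustrative client-go sketch (not part of the e2e framework) that checks the same condition; the kubeconfig path is taken from the kubeConfig line in this log and everything else is an assumption for the example.

// nodesready.go: illustrative sketch (not e2e framework code) that lists all
// nodes and reports whether each carries the Ready condition with status True,
// the same field dumped as NodeCondition{Type:Ready,Status:True,...} above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path taken from the kubeConfig line in this log; adjust for your cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/kind-test-config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		ready := false
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s ready=%v\n", n.Name, ready)
	}
}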
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sVolumes\sConfigMap\sshould\sbe\smountable$'
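The --ginkgo.focus pattern in the repro command above is simply the full test name with regex metacharacters escaped and spaces replaced by \s. A rough Go sketch of how such a pattern can be built follows; it is an approximation for illustration only (for example, the tooling that produced the command also escapes hyphens, which regexp.QuoteMeta does not).

// focusregex.go: illustrative helper that turns a Ginkgo test name into a
// focus pattern similar to the one in the command above. This is not the
// actual CI tooling; it is only meant to explain the escaped form.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

func focusPattern(testName string) string {
	// Escape regex metacharacters ([, ], ., etc.), then replace spaces with \s
	// so the pattern survives shell quoting, as in the command above.
	escaped := regexp.QuoteMeta(testName)
	return strings.ReplaceAll(escaped, " ", `\s`) + "$"
}

func main() {
	fmt.Println(focusPattern("Kubernetes e2e suite [sig-storage] Volumes ConfigMap should be mountable"))
	// Prints: Kubernetes\se2e\ssuite\s\[sig-storage\]\sVolumes\sConfigMap\sshould\sbe\smountable$
}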
/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48 May 19 19:16:30.228: Failed to create client pod: etcdserver: request timed out /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:511 from junit_13.xml
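The failure above is the apiserver passing through a transient etcd error ("etcdserver: request timed out") while the volume fixture was creating its client pod. Purely as an illustration of how a caller could tolerate such transient server-side errors, here is a hedged client-go sketch that retries the Create call; it is not how test/e2e/framework/volume/fixtures.go is implemented, and the package and function names are made up for the example.

// createwithretry.go: illustrative sketch of retrying a pod Create on
// transient apiserver/etcd errors such as "etcdserver: request timed out".
// Hypothetical helper, not the e2e framework's implementation.
package volumeutil

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// createPodWithRetry retries pod creation for up to two minutes when the
// apiserver reports a retriable condition (internal error, timeout, or
// too many requests), and gives up immediately on any other error.
func createPodWithRetry(cs kubernetes.Interface, ns string, pod *corev1.Pod) (*corev1.Pod, error) {
	var created *corev1.Pod
	err := wait.PollImmediate(5*time.Second, 2*time.Minute, func() (bool, error) {
		p, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{})
		if err == nil {
			created = p
			return true, nil
		}
		if apierrors.IsInternalError(err) || apierrors.IsServerTimeout(err) ||
			apierrors.IsTimeout(err) || apierrors.IsTooManyRequests(err) {
			return false, nil // transient; try again
		}
		return false, err // permanent failure
	})
	return created, err
}

Whether retrying is appropriate depends on the test's intent; masking etcd timeouts this way can also hide genuine control-plane problems like the ones surfaced in this run.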
[BeforeEach] [sig-storage] Volumes /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 19 19:16:04.195: INFO: >>> kubeConfig: /root/.kube/kind-test-config STEP: Building a namespace api object, basename volume STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Volumes /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:42 [It] should be mountable /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:48 STEP: starting configmap-client STEP: Deleting pod configmap-client in namespace volume-6067 May 19 19:16:24.054: INFO: Waiting for pod configmap-client to disappear May 19 19:16:24.218: INFO: Pod configmap-client still exists May 19 19:16:26.218: INFO: Waiting for pod configmap-client to disappear May 19 19:16:26.266: INFO: Pod configmap-client still exists May 19 19:16:28.218: INFO: Waiting for pod configmap-client to disappear May 19 19:16:28.222: INFO: Pod configmap-client still exists May 19 19:16:30.219: INFO: Waiting for pod configmap-client to disappear May 19 19:16:30.228: INFO: Pod configmap-client no longer exists May 19 19:16:30.228: FAIL: Failed to create client pod: etcdserver: request timed out Full Stack Trace k8s.io/kubernetes/test/e2e/framework/volume.TestVolumeClient(...) /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/volume/fixtures.go:511 k8s.io/kubernetes/test/e2e/storage.glob..func31.2.1() /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/volumes.go:111 +0x6bf k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000c90780) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000c90780) _output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc000c90780, 0x72e36d8) /usr/local/go/src/testing/testing.go:1203 +0xe5 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1248 +0x2b3 [AfterEach] [sig-storage] Volumes /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "volume-6067". STEP: Found 5 events.
May 19 19:16:30.242: INFO: At 2022-05-19 19:16:04 +0000 UTC - event for configmap-client: {default-scheduler } Scheduled: Successfully assigned volume-6067/configmap-client to kind-worker May 19 19:16:30.242: INFO: At 2022-05-19 19:16:24 +0000 UTC - event for configmap-client: {kubelet kind-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-1" already present on machine May 19 19:16:30.242: INFO: At 2022-05-19 19:16:25 +0000 UTC - event for configmap-client: {kubelet kind-worker} Created: Created container configmap-client May 19 19:16:30.242: INFO: At 2022-05-19 19:16:25 +0000 UTC - event for configmap-client: {kubelet kind-worker} Started: Started container configmap-client May 19 19:16:30.242: INFO: At 2022-05-19 19:16:26 +0000 UTC - event for configmap-client: {kubelet kind-worker} Killing: Stopping container configmap-client May 19 19:16:30.245: INFO: POD NODE PHASE GRACE CONDITIONS May 19 19:16:30.245: INFO: May 19 19:16:30.248: INFO: Logging node info for node kind-control-plane May 19 19:16:30.250: INFO: Node Info: &Node{ObjectMeta:{kind-control-plane 7e5be5e1-f40a-4fa4-9c1e-ae1356a8d5d2 42765 0 2022-05-19 18:59:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-control-plane kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-05-19 18:59:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kube-controller-manager Update v1 2022-05-19 18:59:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-05-19 18:59:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-control-plane,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} 
{<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:14:49 +0000 UTC,LastTransitionTime:2022-05-19 18:59:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.4,},NodeAddress{Type:Hostname,Address:kind-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6da7820732414c09b805ad6b354ea130,SystemUUID:cd27b40e-6f1b-48bc-8ba8-36264df7de17,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:16:30.250: INFO: Logging kubelet events for node kind-control-plane May 19 19:16:30.254: INFO: Logging pods the kubelet thinks is on node kind-control-plane May 19 19:16:30.276: INFO: coredns-78fcd69978-szhjl started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.276: INFO: Container coredns ready: true, restart count 
0 May 19 19:16:30.276: INFO: local-path-provisioner-6c9449b9dd-rq246 started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.276: INFO: Container local-path-provisioner ready: true, restart count 0 May 19 19:16:30.276: INFO: etcd-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.276: INFO: Container etcd ready: true, restart count 0 May 19 19:16:30.276: INFO: kube-apiserver-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.276: INFO: Container kube-apiserver ready: true, restart count 0 May 19 19:16:30.276: INFO: kube-controller-manager-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.276: INFO: Container kube-controller-manager ready: false, restart count 1 May 19 19:16:30.276: INFO: kindnet-sp68s started at 2022-05-19 18:59:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.276: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:30.276: INFO: kube-scheduler-kind-control-plane started at 2022-05-19 18:59:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.276: INFO: Container kube-scheduler ready: false, restart count 1 May 19 19:16:30.276: INFO: coredns-78fcd69978-79cfm started at 2022-05-19 18:59:48 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.276: INFO: Container coredns ready: true, restart count 0 May 19 19:16:30.276: INFO: kube-proxy-c8wmp started at 2022-05-19 18:59:52 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.276: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:30.365: INFO: Latency metrics for node kind-control-plane May 19 19:16:30.365: INFO: Logging node info for node kind-worker May 19 19:16:30.368: INFO: Node Info: &Node{ObjectMeta:{kind-worker 5aace22e-9461-4dd4-8842-d4c95088e6c2 46103 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-7246":"csi-mock-csi-mock-volumes-7246"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}} } {kubeadm Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2022-05-19 19:15:56 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:15:56 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.3,},NodeAddress{Type:Hostname,Address:kind-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:23febdd7255b49db9d80d997950dd2f2,SystemUUID:09159bf9-dc54-4c7a-91f7-a2bdb5d0f9d7,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 
k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4ad90a11b55313b182afc186b9876c8e891531b8db4c9bf1541953021618d0e2 k8s.gcr.io/etcd:3.4.13-0],SizeBytes:86742272,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:e7fddbaac4c3451da2365ab90bad149d32f11409738034e41e0f460927f7c276 k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.4],SizeBytes:24757245,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:4051e85640c22f8e00c34dbd273576fc9e1e2829992656588062be9c0f69b04b 
k8s.gcr.io/e2e-test-images/nonroot:1.1],SizeBytes:17748448,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 19 19:16:30.369: INFO: Logging kubelet events for node kind-worker May 19 19:16:30.373: INFO: Logging pods the kubelet thinks is on node kind-worker May 19 19:16:30.383: INFO: up-down-2-prvvv started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.383: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:30.383: INFO: netserver-0 started at 2022-05-19 19:14:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.383: INFO: Container webserver ready: true, restart count 0 May 19 19:16:30.383: INFO: ss2-0 started at 2022-05-19 19:15:40 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.383: INFO: Container webserver ready: true, restart count 0 May 19 19:16:30.383: INFO: hostexec-kind-worker-9n75g started at 2022-05-19 19:16:01 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.383: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:30.383: INFO: startup-script started at 2022-05-19 19:16:00 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.383: INFO: Container startup-script ready: true, restart count 0 May 19 19:16:30.383: INFO: ss2-2 started at 2022-05-19 19:15:59 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.383: INFO: Container webserver ready: true, restart count 0 May 19 
19:16:30.383: INFO: test-new-deployment-847dcfb7fb-c4njf started at 2022-05-19 19:15:58 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.383: INFO: Container httpd ready: true, restart count 0 May 19 19:16:30.383: INFO: csi-mockplugin-resizer-0 started at 2022-05-19 19:14:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.383: INFO: Container csi-resizer ready: true, restart count 0 May 19 19:16:30.383: INFO: test-container-pod started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.383: INFO: Container webserver ready: true, restart count 0 May 19 19:16:30.383: INFO: host-test-container-pod started at 2022-05-19 19:15:37 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.383: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:30.383: INFO: kindnet-4gdb4 started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.383: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:30.383: INFO: csi-mockplugin-attacher-0 started at 2022-05-19 19:14:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.384: INFO: Container csi-attacher ready: true, restart count 0 May 19 19:16:30.384: INFO: test-container-pod started at 2022-05-19 19:15:43 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.384: INFO: Container webserver ready: true, restart count 0 May 19 19:16:30.384: INFO: up-down-2-gkj59 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.384: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:30.384: INFO: hostexec-kind-worker-f96wj started at 2022-05-19 19:15:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.384: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:30.384: INFO: kube-proxy-cv6pt started at 2022-05-19 18:59:55 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.384: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:30.384: INFO: pod-secrets-a23f9038-0c7a-436f-a040-8f745ef7d572 started at 2022-05-19 19:15:35 +0000 UTC (0+3 container statuses recorded) May 19 19:16:30.384: INFO: Container creates-volume-test ready: true, restart count 0 May 19 19:16:30.384: INFO: Container dels-volume-test ready: true, restart count 0 May 19 19:16:30.384: INFO: Container upds-volume-test ready: true, restart count 0 May 19 19:16:30.384: INFO: netserver-0 started at 2022-05-19 19:14:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.384: INFO: Container webserver ready: true, restart count 0 May 19 19:16:30.384: INFO: oidc-discovery-validator started at 2022-05-19 19:15:50 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.384: INFO: Container oidc-discovery-validator ready: false, restart count 0 May 19 19:16:30.384: INFO: hostexec-kind-worker-dcvzn started at 2022-05-19 19:15:59 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.384: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:30.384: INFO: netserver-0 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.384: INFO: Container webserver ready: true, restart count 0 May 19 19:16:30.384: INFO: csi-mockplugin-0 started at 2022-05-19 19:14:14 +0000 UTC (0+3 container statuses recorded) May 19 19:16:30.384: INFO: Container csi-provisioner ready: true, restart count 0 May 19 19:16:30.384: INFO: Container driver-registrar ready: true, restart count 0 May 19 19:16:30.384: INFO: Container 
mock ready: true, restart count 0 May 19 19:16:30.560: INFO: Latency metrics for node kind-worker May 19 19:16:30.560: INFO: Logging node info for node kind-worker2 May 19 19:16:30.564: INFO: Node Info: &Node{ObjectMeta:{kind-worker2 7a16523d-7da3-4c78-89f3-8eb0caae50f1 46424 0 2022-05-19 18:59:44 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux io.kubernetes.storage.mock/node:some-mock-node kubernetes.io/arch:amd64 kubernetes.io/hostname:kind-worker2 kubernetes.io/os:linux topology.hostpath.csi/node:kind-worker2] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-1152":"kind-worker2"} kubeadm.alpha.kubernetes.io/cri-socket:unix:///run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}} } {kubelet Update v1 2022-05-19 18:59:44 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:spec":{"f:providerID":{}}} } {kubeadm Update v1 2022-05-19 18:59:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}} } {kube-controller-manager Update v1 2022-05-19 19:15:42 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-05-19 19:16:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:kind://docker/kind/kind-worker2,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{8 0} {<nil>} 8 DecimalSI},ephemeral-storage: {{791327236096 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{54762434560 0} {<nil>} 53478940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:44 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-19 19:16:24 +0000 UTC,LastTransitionTime:2022-05-19 18:59:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.18.0.2,},NodeAddress{Type:Hostname,Address:kind-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:9e448859533f489fa0673b5d908c4c8a,SystemUUID:73bc70df-3537-48c2-a00e-91739ab5b72a,BootID:5dae428b-d063-4e2f-9327-89534e0ed1ad,KernelVersion:5.4.0-1065-gke,OSImage:Ubuntu 21.10,ContainerRuntimeVersion:containerd://1.6.4,KubeletVersion:v1.22.10-rc.0.21+1b1046d0845ea3,KubeProxyVersion:v1.22.10-rc.0.21+1b1046d0845ea3,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:5698c25d07bf911b696d3663697a0177faa3a0621f57ea08c491c9e5585904b2 k8s.gcr.io/kube-apiserver:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:129577427,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:1331479ec6d51cfd2b5b731402ac7315f2ee4290fddb988ba19eca3259734372 k8s.gcr.io/kube-controller-manager:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:123265849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:112029652,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner@sha256:8bc20b52ce066dd4ea3d9eaac40c04ea8a77f47c33789676580cf4c7c9ea3c3d k8s.gcr.io/e2e-test-images/glusterdynamic-provisioner:v1.0],SizeBytes:111199402,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:77faa706a9765ca5f1a13bda0a14f62fad365589d64076f9b5c6f8622fcb9ee5 k8s.gcr.io/kube-proxy:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:105430215,},ContainerImage{Names:[k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/library/import-2022-05-19@sha256:6862e078bbe86429ef2e358e78da00fc3b2a7d0e98cbc2a8dfae7c5425076121 k8s.gcr.io/kube-scheduler:v1.22.10-rc.0.21_1b1046d0845ea3],SizeBytes:53932856,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:50002177,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:49230179,},ContainerImage{Names:[docker.io/kindest/kindnetd:v20220510-4929dd75],SizeBytes:45239873,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:716d2f68314c5c4ddd5ecdb45183fcb4ed8019015982c1321571f863989b70b0 k8s.gcr.io/e2e-test-images/httpd:2.4.39-1],SizeBytes:41902332,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:40765006,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[docker.io/kindest/local-path-provisioner:v0.0.22-kind.0],SizeBytes:17375346,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:1b7c978a792a8fa4e96244e8059bd71bb49b07e2e5a897fb0c867bdc6db20d5d k8s.gcr.io/sig-storage/livenessprobe:v2.3.0],SizeBytes:7933739,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:6979365,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/kindest/local-path-helper:v20220512-507ff70b],SizeBytes:2859518,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 
k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-1152^11ea9557-d7a8-11ec-b978-a6428bea170d,DevicePath:,},},Config:nil,},} May 19 19:16:30.565: INFO: Logging kubelet events for node kind-worker2 May 19 19:16:30.569: INFO: Logging pods the kubelet thinks is on node kind-worker2 May 19 19:16:30.580: INFO: kube-proxy-wgjrm started at 2022-05-19 18:59:58 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.580: INFO: Container kube-proxy ready: true, restart count 0 May 19 19:16:30.580: INFO: hostexec-kind-worker2-86jcw started at 2022-05-19 19:15:26 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.580: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:30.580: INFO: ss2-1 started at 2022-05-19 19:15:51 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.580: INFO: Container webserver ready: true, restart count 0 May 19 19:16:30.580: INFO: netserver-1 started at 2022-05-19 19:14:57 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.580: INFO: Container webserver ready: true, restart count 0 May 19 19:16:30.580: INFO: pod-secrets-bab7a309-2f16-455c-99ff-6d1894bc83b0 started at 2022-05-19 19:16:04 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.580: INFO: Container secret-test ready: true, restart count 0 May 19 19:16:30.580: INFO: up-down-2-5zs72 started at 2022-05-19 19:14:42 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.580: INFO: Container up-down-2 ready: true, restart count 0 May 19 19:16:30.580: INFO: ss-0 started at 2022-05-19 19:15:29 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.580: INFO: Container webserver ready: true, restart count 0 May 19 19:16:30.580: INFO: csi-hostpathplugin-0 started at 2022-05-19 19:15:33 +0000 UTC (0+7 container statuses recorded) May 19 19:16:30.580: INFO: Container csi-attacher ready: true, restart count 0 May 19 19:16:30.580: INFO: Container csi-provisioner ready: true, restart count 0 May 19 19:16:30.580: INFO: Container csi-resizer ready: true, restart count 0 May 19 19:16:30.580: INFO: Container csi-snapshotter ready: true, restart count 0 May 19 19:16:30.580: INFO: Container hostpath ready: true, restart count 0 May 19 19:16:30.580: INFO: Container liveness-probe ready: true, restart count 0 May 19 19:16:30.580: INFO: Container node-driver-registrar ready: true, restart count 0 May 19 19:16:30.580: INFO: liveness-188ed752-b6d5-4d2d-8753-f3495434988f started at 2022-05-19 19:14:22 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.580: INFO: Container agnhost-container ready: false, restart count 4 May 19 19:16:30.580: INFO: netserver-1 started at 2022-05-19 19:14:53 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.580: INFO: Container webserver ready: true, restart count 0 May 19 19:16:30.580: INFO: boom-server started at 2022-05-19 19:15:50 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.580: INFO: Container boom-server ready: true, restart count 0 May 19 19:16:30.580: INFO: hostexec-kind-worker2-vfx8t started at 2022-05-19 19:15:14 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.580: INFO: Container agnhost-container ready: true, restart count 0 May 19 19:16:30.580: INFO: 
rs-jmlhb started at 2022-05-19 19:15:20 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.580: INFO: Container donothing ready: false, restart count 0 May 19 19:16:30.580: INFO: forbid-27549796-5gbns started at 2022-05-19 19:16:00 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.580: INFO: Container c ready: true, restart count 0 May 19 19:16:30.580: INFO: netserver-1 started at 2022-05-19 19:15:47 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.580: INFO: Container webserver ready: true, restart count 0 May 19 19:16:30.580: INFO: busybox-91f9a5ba-0aeb-4445-a18c-ab8e7ec56a1a started at 2022-05-19 19:15:23 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.580: INFO: Container busybox ready: true, restart count 0 May 19 19:16:30.580: INFO: kindnet-jk9nv started at 2022-05-19 18:59:45 +0000 UTC (0+1 container statuses recorded) May 19 19:16:30.580: INFO: Container kindnet-cni ready: true, restart count 0 May 19 19:16:30.726: INFO: Latency metrics for node kind-worker2 May 19 19:16:30.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-6067" for this suite.
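For reference alongside the node dumps above: a minimal client-go sketch (not part of the test run or the e2e framework itself) that lists the same NodeCondition fields the framework logs for each node. The kubeconfig path is a placeholder assumption and must point at the kind cluster under test.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; point it at the kind cluster being tested.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kind-kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// List all nodes and print the condition fields (Type, Status, Reason)
	// that appear in the Node Info dumps above.
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			fmt.Printf("%s\t%s=%s\t%s\n", n.Name, c.Type, c.Status, c.Reason)
		}
	}
}
```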
Kubernetes e2e suite [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
Kubernetes e2e suite [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
Kubernetes e2e suite [sig-api-machinery] Discovery Custom resource should have storage version hash
Kubernetes e2e suite [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob
Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil
Kubernetes e2e suite [sig-api-machinery] Garbage collector should support cascading deletion of custom resources
Kubernetes e2e suite [sig-api-machinery] Garbage collector should support orphan deletion of custom resources
Kubernetes e2e suite [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
Kubernetes e2e suite [sig-api-machinery] Generated clientset should create v1 cronJobs, delete cronJobs, watch cronJobs
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
Kubernetes e2e suite [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s
Kubernetes e2e suite [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL
Kubernetes e2e suite [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should give up ownership of a field if forced applied by a controller
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should work for CRDs
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should work for subresources
Kubernetes e2e suite [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return pod details
Kubernetes e2e suite [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json"
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json,application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf,application/json"
Kubernetes e2e suite [sig-api-machinery] health handlers should contain necessary checks
Kubernetes e2e suite [sig-api-machinery] server version should find the server version [Conformance]
Kubernetes e2e suite [sig-apps] CronJob should delete failed finished jobs with limit of one job
Kubernetes e2e suite [sig-apps] CronJob should delete successful finished jobs with limit of one successful job
Kubernetes e2e suite [sig-apps] CronJob should not emit unexpected warnings
Kubernetes e2e suite [sig-apps] CronJob should remove from active list jobs that have been deleted
Kubernetes e2e suite [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]
Kubernetes e2e suite [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]
Kubernetes e2e suite [sig-apps] CronJob should support CronJob API operations [Conformance]
Kubernetes e2e suite [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]
Kubernetes e2e suite [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [sig-apps] Deployment deployment should delete old replica sets [Conformance]
Kubernetes e2e suite [sig-apps] Deployment deployment should support proportional scaling [Conformance]
Kubernetes e2e suite [sig-apps] Deployment deployment should support rollover [Conformance]
Kubernetes e2e suite [sig-apps] Deployment iterative rollouts should eventually progress
Kubernetes e2e suite [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]
Kubernetes e2e suite [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]
Kubernetes e2e suite [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef
Kubernetes e2e suite [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]
Kubernetes e2e suite [sig-apps] DisruptionController evictions: enough pods, absolute => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: no PDB => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: too few pods, absolute => should not allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]
Kubernetes e2e suite [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]
Kubernetes e2e suite [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]
Kubernetes e2e suite [sig-apps] DisruptionController should observe that the PodDisruptionBudget status is not updated for unmanaged pods
Kubernetes e2e suite [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]
Kubernetes e2e suite [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
Kubernetes e2e suite [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname
Kubernetes e2e suite [sig-apps] Job should delete a job [Conformance]
Kubernetes e2e suite [sig-apps] Job should delete pods when suspended
Kubernetes e2e suite [sig-apps] Job should fail to exceed backoffLimit
Kubernetes e2e suite [sig-apps] Job should fail when exceeds active deadline
Kubernetes e2e suite [sig-apps] Job should not create pods when created in suspend state
Kubernetes e2e suite [sig-apps] Job should remove pods when job is deleted
Kubernetes e2e suite [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
Kubernetes e2e suite [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted
Kubernetes e2e suite [sig-apps] Job should run a job to completion when tasks succeed
Kubernetes e2e suite [sig-apps] ReplicaSet Replace and Patch tests [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota
Kubernetes e2e suite [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should release no longer matching pods [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]
Kubernetes e2e suite [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]
Kubernetes e2e suite [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR
Kubernetes e2e suite [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should ensure a single API token exists
Kubernetes e2e suite [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should mount projected service account token [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl apply apply set/view last-applied
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl client-side validation should create/apply a CR with unknown fields for CRD with no validation schema
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR for CRD with validation schema
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl cluster-info dump should check if cluster-info dump succeeds
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should contain last line of the log
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should handle in-cluster config
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a failing command
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a successful command
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes running a failing command
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes running a successful command
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec using resource/name
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support port-forward
Kubernetes e2e suite [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client kubectl wait should ignore not found error with --for=delete
Kubernetes e2e suite [sig-instrumentation] Events API should delete a collection of events [Conformance]
Kubernetes e2e suite [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
Kubernetes e2e suite [sig-instrumentation] Events should delete a collection of events [Conformance]
Kubernetes e2e suite [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from API server.
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.
Kubernetes e2e suite [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
Kubernetes e2e suite [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
Kubernetes e2e suite [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service
Kubernetes e2e suite [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for ExternalName services [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for services [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for the cluster [Conformance]
Kubernetes e2e suite [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]
Kubernetes e2e suite [sig-network] DNS should support configurable pod DNS nameservers [Conformance]
Kubernetes e2e suite [sig-network] DNS should support configurable pod resolv.conf
Kubernetes e2e suite [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
Kubernetes e2e suite [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
Kubernetes e2e suite [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]
Kubernetes e2e suite [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]
Kubernetes e2e suite [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]
Kubernetes e2e suite [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Ingress API should support creating Ingress API operations [Conformance]
Kubernetes e2e suite [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]
Kubernetes e2e suite [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]
Kubernetes e2e suite [sig-network] Netpol API should support creating NetworkPolicy API operations
Kubernetes e2e suite [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should be able to handle large requests: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for endpoint-Service: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for node-Service: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for node-Service: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for pod-Service: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for pod-Service: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should support basic nodePort: udp functionality
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update endpoints: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update endpoints: udp
Kubernetes e2e suite [sig-network] Networking should check kube-proxy urls
Kubernetes e2e suite [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services
Kubernetes e2e suite [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
Kubernetes e2e suite [sig-network] Proxy version v1 should proxy logs on node using proxy subresource
Kubernetes e2e suite [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource
Kubernetes e2e suite [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
Kubernetes e2e suite [sig-network] Service endpoints latency should not be very high [Conformance]
Kubernetes e2e suite [sig-network] Services should allow pods to hairpin back to themselves through services
Kubernetes e2e suite [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to create a functioning NodePort service [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to up and down services
Kubernetes e2e suite [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols
Kubernetes e2e suite [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
Kubernetes e2e suite [sig-network] Services should check NodePort out-of-range
Kubernetes e2e suite [sig-network] Services should complete a service status lifecycle [Conformance]
Kubernetes e2e suite [sig-network] Services should create endpoints for unready pods
Kubernetes e2e suite [sig-network] Services should find a service from listing all namespaces [Conformance]
Kubernetes e2e suite [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should implement service.kubernetes.io/headless
Kubernetes e2e suite [sig-network] Services should implement service.kubernetes.io/service-proxy-name
Kubernetes e2e suite [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
Kubernetes e2e suite [sig-network] Services should prevent NodePort collisions
Kubernetes e2e suite [sig-network] Services should provide secure master service [Conformance]
Kubernetes e2e suite [sig-network] Services should release NodePorts on delete
Kubernetes e2e suite [sig-network] Services should serve a basic endpoint from pods [Conformance]
Kubernetes e2e suite [sig-network] Services should serve multiport endpoints from pods [Conformance]
Kubernetes e2e suite [sig-network] Services should test the lifecycle of an Endpoint [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should update ConfigMap successfully
Kubernetes e2e suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
Kubernetes e2e suite [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
Kubernetes e2e suite [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
Kubernetes e2e suite [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
Kubernetes e2e suite [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
Kubernetes e2e suite [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
Kubernetes e2e suite [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Lease lease API should be available [Conformance]
Kubernetes e2e suite [sig-node] Mount propagation should propagate mounts within defined scopes
Kubernetes e2e suite [sig-node] NodeLease when the NodeLease feature is enabled should have OwnerReferences set
Kubernetes e2e suite [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace
Kubernetes e2e suite [sig-node] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently
Kubernetes e2e suite [sig-node] PodTemplates should delete a collection of pod templates [Conformance]
Kubernetes e2e suite [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]
Kubernetes e2e suite [sig-node] Pods Extended Delete Grace Period should be submitted and removed
Kubernetes e2e suite [sig-node] Pods Extended Pod Container Status should never report container start when an init container fails
Kubernetes e2e suite [sig-node] Pods Extended Pod Container Status should never report success for a pending container
Kubernetes e2e suite [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done
Kubernetes e2e suite [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
Kubernetes e2e suite [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should delete a collection of pods [Conformance]
Kubernetes e2e suite [sig-node] Pods should get a host IP [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]
Kubernetes e2e suite [sig-node] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]
Kubernetes e2e suite [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process
Kubernetes e2e suite [sig-node] PreStop should call prestop when killing a pod [Conformance]
Kubernetes e2e suite [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
Kubernetes e2e suite [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it
Kubernetes e2e suite [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe
Kubernetes e2e suite [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container should be ready immediately after startupProbe succeeds
Kubernetes e2e suite [sig-node] Probing container should be restarted by liveness probe after startup probe enables it
Kubernetes e2e suite [sig-node] Probing container should be restarted startup probe fails
Kubernetes e2e suite [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout
Kubernetes e2e suite [sig-node] Probing container should be restarted with a local redirect http liveness probe
Kubernetes e2e suite [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints
Kubernetes e2e suite [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]
Kubernetes e2e suite [sig-node] Secrets should patch a secret [Conformance]
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
Kubernetes e2e suite [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Sysctls [LinuxOnly] [NodeConformance] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
Kubernetes e2e suite [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
Kubernetes e2e suite [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
Kubernetes e2e suite [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21]
Kubernetes e2e suite [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]
Kubernetes e2e suite [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.
Kubernetes e2e suite [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which share the same volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which share the same volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which share the same volume
Kubernetes e2e suite [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File
Kubernetes e2e suite [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default
Kubernetes e2e suite [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment
Kubernetes e2e suite [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present
Kubernetes e2e suite [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment
Kubernetes e2e suite [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil
Kubernetes e2e suite [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true
Kubernetes e2e suite [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed
Kubernetes e2e suite [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false
Kubernetes e2e suite [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled
Kubernetes e2e suite [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused
Kubernetes e2e suite [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity
Kubernetes e2e suite [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity
Kubernetes e2e suite [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity
Kubernetes e2e suite [sig-storage] CSI mock volume storage capacity exhausted, immediate binding
Kubernetes e2e suite [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology
Kubernetes e2e suite [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology
Kubernetes e2e suite [sig-storage] CSI mock volume storage capacity unlimited
Kubernetes e2e suite [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Dynamic Provisioning GlusterDynamicProvisioner should create and delete persistent volumes [fast]
Kubernetes e2e suite [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size
Kubernetes e2e suite [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
Kubernetes e2e suite [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap
Kubernetes e2e suite [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected
Kubernetes e2e suite [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret
Kubernetes e2e suite [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [sig-storage] HostPath should support r/w [NodeConformance]
Kubernetes e2e suite [sig-storage] HostPath should support subPath [NodeConformance]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] PV Protection Verify "immediate" deletion of a PV that is not bound to a PVC
Kubernetes e2e suite [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately
Kubernetes e2e suite [sig-storage] PVC Protection Verify "immediate" deletion of a PVC that is not in active use by a pod
Kubernetes e2e suite [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately
Kubernetes e2e suite [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable
Kubernetes e2e suite [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity
Kubernetes e2e suite [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified
Kubernetes e2e suite [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (fairness)
Kubernetes e2e suite [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (priority)
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [Flaky] kubectl explain works for CR with the same resource name as built-in object.
Kubernetes e2e suite [sig-api-machinery] Etcd failure [Disruptive] should recover from SIGKILL
Kubernetes e2e suite [sig-api-machinery] Etcd failure [Disruptive] should recover from network partition with master
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should always delete fast (ALL of 100 namespaces in 150 seconds) [Feature:ComprehensiveNamespaceDraining]
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's multiple priority class scope (quota set to pod count: 2) against 2 pods with same priority classes.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (cpu, memory quota set) against a pod with same priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with different priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with same priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpExists).
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpNotIn).
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with same priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with best effort scope using scope-selectors.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with terminating scopes through scope selectors.
Kubernetes e2e suite [sig-api-machinery] Servers with support for API chunking should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow]
Kubernetes e2e suite [sig-api-machinery] StorageVersion resources [Feature:StorageVersionAPI] storage version with non-existing id should be GC'ed
Kubernetes e2e suite [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]
Kubernetes e2e suite [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should run and stop complex daemon with node affinity
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should surge pods onto nodes when spec was updated and update strategy is RollingUpdate
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Kube-proxy should recover after being killed accidentally
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Kubelet should not restart containers across restart
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart
Kubernetes e2e suite [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout
Kubernetes e2e suite [sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer => should not allow an eviction [Serial]
Kubernetes e2e suite [sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction [Serial]
Kubernetes e2e suite [sig-apps] Job should run a job to completion with CPU requests [Serial]
Kubernetes e2e suite [sig-apps] ReplicaSet should serve a basic image on each replica with a private image
Kubernetes e2e suite [sig-apps] ReplicationController should serve a basic image on each replica with a private image
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster
Kubernetes e2e suite [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working mysql cluster
Kubernetes e2e suite [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working redis cluster
Kubernetes e2e suite [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working zookeeper cluster
Kubernetes e2e suite [sig-apps] StatefulSet MinReadySeconds should be honored when enabled [Feature:StatefulSetMinReadySeconds] [alpha]
Kubernetes e2e suite [sig-apps] [Feature:TTLAfterFinished] job should be deleted once it finishes after TTL seconds
Kubernetes e2e suite [sig-apps] stateful Upgrade [Feature:StatefulUpgrade] stateful upgrade should maintain a functioning cluster
Kubernetes e2e suite [sig-auth] Metadata Concealment should run a check-metadata-concealment job to completion
Kubernetes e2e suite [sig-auth] PodSecurityPolicy [Feature:PodSecurityPolicy] should allow pods under the privileged policy.PodSecurityPolicy
Kubernetes e2e suite [sig-auth] PodSecurityPolicy [Feature:PodSecurityPolicy] should enforce the restricted policy.PodSecurityPolicy
Kubernetes e2e suite [sig-auth] PodSecurityPolicy [Feature:PodSecurityPolicy] should forbid pod creation when no PSP is available
Kubernetes e2e suite [sig-auth] ServiceAccount admission controller migration [Feature:BoundServiceAccountTokenVolume] master upgrade should maintain a functioning cluster
Kubernetes e2e suite [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthenticator] The kubelet can delegate ServiceAccount tokens to the API server
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthenticator] The kubelet's main port 10250 should reject requests with no credentials
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to create another node
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to delete another node
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent configmap should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent secret should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting a secret for a workload the node has access to should succeed
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting an existing configmap should exit with the Forbidden error
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting an existing secret should exit with the Forbidden error
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down empty nodes [Feature:ClusterAutoscalerScalability3]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up at all [Feature:ClusterAutoscalerScalability1]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up twice [Feature:ClusterAutoscalerScalability2]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group up from 0[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should not scale GPU pool up if pod does not require GPUs [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should scale down GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 0 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Shouldn't perform scale up operation and should list unhealthy status if most of the cluster is broken[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining multiple pods one by one as dictated by pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down when rescheduling a pod is required and pdb allows for it[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and one node is broken [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting EmptyDir volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to pod anti-affinity [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should scale up when non expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale down when non expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is preempted [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't trigger additional scale-ups during processing scale-up [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed
Kubernetes e2e suite [sig-autoscaling] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
Kubernetes e2e suite [sig-autoscaling] [Feature:ClusterSizeAutoscalingScaleUp] [Slow] Autoscaling Autoscaling a service from 1 pod and 3 nodes to 8 pods and >=4 nodes takes less than 15 minutes
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 1 pod to 2 pods
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 2 pods to 1 pod [Slow]
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Object from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Pod from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Pod from Stackdriver with Prometheus [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with External Metric with target average value from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with External Metric with target value from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale up with two External metrics from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale up with two metrics of type Pod from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl taint [Serial] should remove all the taints with the same key off a node
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl taint [Serial] should update the taint on a node
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command with --leave-stdin-open
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command without --restart=Never
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command without --restart=Never, but with --rm
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support inline execution and attach
Kubernetes e2e suite [sig-cloud-provider-gcp] Addon update should propagate add-on file changes [Slow]
Kubernetes e2e suite [sig-cloud-provider-gcp] Downgrade [Feature:Downgrade] cluster downgrade should maintain a functioning cluster [Feature:ClusterDowngrade]
Kubernetes e2e suite [sig-cloud-provider-gcp] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool]
Kubernetes e2e suite [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas different zones [Serial][Disruptive]
Kubernetes e2e suite [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas multizone workers [Serial][Disruptive]
Kubernetes e2e suite [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas same zone [Serial][Disruptive]
Kubernetes e2e suite [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to add nodes
Kubernetes e2e suite [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to delete nodes
Kubernetes e2e suite [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to cadvisor port 4194 using proxy subresource
Kubernetes e2e suite [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to the readonly kubelet port 10255 using proxy subresource
Kubernetes e2e suite [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 10255 open on its all public IP addresses
Kubernetes e2e suite [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 4194 open on its all public IP addresses
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering unclean reboot and ensure they function upon restart
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by triggering kernel panic and ensure they function upon restart
Kubernetes e2e suite [sig-cloud-provider-gcp] Recreate [Feature:Recreate] recreate nodes and ensure they function upon restart
Kubernetes e2e suite [sig-cloud-provider-gcp] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover
Kubernetes e2e suite [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade]
Kubernetes e2e suite [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade]
Kubernetes e2e suite [sig-cloud-provider-gcp] [Disruptive]NodeLease NodeLease deletion node lease should be deleted when corresponding node is deleted
Kubernetes e2e suite [sig-cloud-provider] [Feature:CloudProvider][Disruptive] Nodes should be deleted on API server if it doesn't exist in the cloud provider
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the signed bootstrap tokens from clusterInfo ConfigMap when bootstrap token is deleted
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the token secret when the secret expired
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should not delete the token secret when the secret is not expired
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should resign the bootstrap tokens when the clusterInfo ConfigMap updated [Serial][Disruptive]
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should sign the new added bootstrap tokens
Kubernetes e2e suite [sig-instrumentation] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should have accelerator metrics [Feature:StackdriverAcceleratorMonitoring]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should have cluster metrics [Feature:StackdriverMonitoring]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for external metrics [Feature:StackdriverExternalMetrics]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for new resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for old resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Stackdriver Metadata Agent [Feature:StackdriverMetadataAgent]
Kubernetes e2e suite [sig-network] ClusterDns [Feature:Example] should create pod that uses dns
Kubernetes e2e suite [sig-network] DNS configMap nameserver Change stubDomain should be able to change stubDomain configuration [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS configMap nameserver Forward external name lookup should forward externalname lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS should provide DNS for the cluster [Provider:GCE]
Kubernetes e2e suite [sig-network] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service
Kubernetes e2e suite [sig-network] Firewall rule control plane should not expose well-known ports
Kubernetes e2e suite [sig-network] Firewall rule should have correct firewall rules for e2e cluster
Kubernetes e2e suite [sig-network] IngressClass [Feature:Ingress] should not set default value if no default IngressClass [Serial]
Kubernetes e2e suite [sig-network] IngressClass [Feature:Ingress] should prevent Ingress creation if more than 1 IngressClass marked as default [Serial]
Kubernetes e2e suite [sig-network] IngressClass [Feature:Ingress] should set default value on new IngressClass [Serial]
Kubernetes e2e suite [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field
Kubernetes e2e suite [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints
Kubernetes e2e suite [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer
Kubernetes e2e suite [sig-network] LoadBalancers ESIPP [Slow] should work for type=NodePort
Kubernetes e2e suite [sig-network] LoadBalancers ESIPP [Slow] should work from pods
Kubernetes e2e suite [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow]
Kubernetes e2e suite [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow]
Kubernetes e2e suite [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow]
Kubernetes e2e suite [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP off [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP on [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] LoadBalancers should handle load balancer cleanup finalizer for service [Slow]
Kubernetes e2e suite [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP off [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP on [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] LoadBalancers should only allow access from service loadbalancer source ranges [Slow]
Kubernetes e2e suite [sig-network] LoadBalancers should reconcile LB health check interval [Slow][Serial][Disruptive]
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] should conform to Ingress spec
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] rolling update backend pods should not cause service disruption
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should be able to create a ClusterIP service
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should be able to switch between IG and NEG modes
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should conform to Ingress spec
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should create NEGs for all ports with the Ingress annotation, and NEGs for the standalone annotation otherwise
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should sync endpoints for both Ingress-referenced NEG and standalone NEG
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should sync endpoints to NEG
Kubernetes e2e suite [sig-network] Loadbalancing: L7 Scalability GCE [Slow] [Serial] [Feature:IngressScale] Creating and updating ingresses should happen promptly with small/medium/large amount of ingresses
Kubernetes e2e suite [sig-network] Loadbalancing: L7 [Slow] Nginx should conform to Ingress spec
Kubernetes e2e suite [sig-network] Netpol API should support creating NetworkPolicy API with endport field [Feature:NetworkPolicyEndPort]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should deny egress from all pods in a namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should deny egress from pods based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should deny ingress from pods on other namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce ingress policy allowing any port traffic to a server on a specific protocol [Feature:NetworkPolicy] [Feature:UDP]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Multiple PodSelectors and NamespaceSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on any PodSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic for a target [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic from pods in all namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic based on NamespaceSelector with MatchLabels using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should not allow access by TCP when a policy specifies only UDP [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should not mistakenly treat 'protocol: SCTP' as 'protocol: TCP', even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should properly isolate pods that are selected by a policy allowing SCTP, even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should support denying of egress traffic on the client side (even if the server explicitly allows this traffic) [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should work with Ingress, Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should support a 'default-deny' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should work with Ingress,Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for endpoint-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for node-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for pod-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update nodePort: http [Slow]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]
Kubernetes e2e suite [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2
Kubernetes e2e suite [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]
Kubernetes e2e suite [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv6][Experimental][LinuxOnly]
Kubernetes e2e suite [sig-network] Networking should provider Internet connection for containers using DNS [Feature:Networking-DNS]
Kubernetes e2e suite [sig-network] Networking should recreate its iptables rules if they are deleted [Disruptive]
Kubernetes e2e suite [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT
Kubernetes e2e suite [sig-network] SCTP [Feature:SCTP] [LinuxOnly] should allow creating a basic SCTP service with pod and endpoints
Kubernetes e2e suite [sig-network] SCTP [Feature:SCTP] [LinuxOnly] should create a ClusterIP Service with SCTP ports
Kubernetes e2e suite [sig-network] SCTP [Feature:SCTP] [LinuxOnly] should create a Pod with SCTP HostPort
Kubernetes e2e suite [sig-network] Services GCE [Slow] should be able to create and tear down a standard-tier load balancer [Slow]
Kubernetes e2e suite [sig-network] Services should be rejected when no endpoints exist
Kubernetes e2e suite [sig-network] Services should respect internalTrafficPolicy=Local Pod (hostNetwork: true) to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [sig-network] Services should respect internalTrafficPolicy=Local Pod (hostNetwork: true) to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [sig-network] Services should respect internalTrafficPolicy=Local Pod to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [sig-network] Services should respect internalTrafficPolicy=Local Pod to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [sig-network] Services should work after restarting apiserver [Disruptive]
Kubernetes e2e suite [sig-network] Services should work after restarting kube-proxy [Disruptive]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: http
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: udp
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: http
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: udp
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: http
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: udp
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: http
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: udp
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for service endpoints using hostNetwork
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: http
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: udp
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should be able to reach pod on ipv4 and ipv6 ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create a single stack service with cluster ip from primary service range
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create pod, add ipv6 and ipv4 ip to pod ips
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create service with ipv4 cluster ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create service with ipv4,v6 cluster ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create service with ipv6 cluster ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create service with ipv6,v4 cluster ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should have ipv4 and ipv6 internal node ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should have ipv4 and ipv6 node podCIDRs [LinuxOnly]
Kubernetes e2e suite [sig-network] [Feature:PerformanceDNS][Serial] Should answer DNS query for maximum number of services per cluster
Kubernetes e2e suite [sig-network] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Downgrade kube-proxy from a DaemonSet to static pods should maintain a functioning cluster [Feature:KubeProxyDaemonSetDowngrade]
Kubernetes e2e suite [sig-network] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Upgrade kube-proxy from static pods to a DaemonSet should maintain a functioning cluster [Feature:KubeProxyDaemonSetUpgrade]
Kubernetes e2e suite [sig-node] AppArmor load AppArmor profiles can disable an AppArmor profile, using unconfined
Kubernetes e2e suite [sig-node] AppArmor load AppArmor profiles should enforce an AppArmor profile
Kubernetes e2e suite [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide container's limits.hugepages-<pagesize> and requests.hugepages-<pagesize> as env vars
Kubernetes e2e suite [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide default limits.hugepages-<pagesize> from node allocatable
Kubernetes e2e suite [sig-node] Kubelet [Serial] [Slow] experimental resource usage tracking [Feature:ExperimentalResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [sig-node] Kubelet [Serial] [Slow] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 0 pods per node
Kubernetes e2e suite [sig-node] Kubelet [Serial] [Slow] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Multiple Pods [Serial] only evicts pods without tolerations from tainted nodes
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Single Pod [Serial] doesn't evict pod with tolerations from tainted nodes
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Single Pod [Serial] evicts pods from tainted nodes
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]
Kubernetes e2e suite [sig-node] NodeProblemDetector should run without error
Kubernetes e2e suite [sig-node] Pod garbage collector [Feature:PodGarbageCollector] [Slow] should handle the creation of 1000 pods
Kubernetes e2e suite [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
Kubernetes e2e suite [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
Kubernetes e2e suite [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]
Kubernetes e2e suite [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]
Kubernetes e2e suite [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling with taints [Serial]
Kubernetes e2e suite [sig-node] SSH should SSH to all nodes and run commands
Kubernetes e2e suite [sig-node] Security Context should support volume SELinux relabeling [Flaky] [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support volume SELinux relabeling when using hostIPC [Flaky] [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support volume SELinux relabeling when using hostPID [Flaky] [LinuxOnly]
Kubernetes e2e suite [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
Kubernetes e2e suite [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace
Kubernetes e2e suite [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted
Kubernetes e2e suite [sig-node] [Feature:Example] Secret should create a pod that reads a secret
Kubernetes e2e suite [sig-node] crictl should be able to run crictl on the node
Kubernetes e2e suite [sig-node] gpu Upgrade [Feature:GPUUpgrade] cluster downgrade should be able to run gpu pod after downgrade [Feature:GPUClusterDowngrade]
Kubernetes e2e suite [sig-node] gpu Upgrade [Feature:GPUUpgrade] cluster upgrade should be able to run gpu pod after upgrade [Feature:GPUClusterUpgrade]
Kubernetes e2e suite [sig-node] gpu Upgrade [Feature:GPUUpgrade] master upgrade should NOT disrupt gpu pod [Feature:GPUMasterUpgrade]
Kubernetes e2e suite [sig-node] kubelet host cleanup with volume mounts [HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (active) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [sig-node] kubelet host cleanup with volume mounts [HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (sleeping) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [sig-scheduling] GPUDevicePluginAcrossRecreate [Feature:Recreate] run Nvidia GPU Device Plugin tests with a recreation
Kubernetes e2e suite [sig-scheduling] Multi-AZ Clusters should spread the pods of a replication controller across zones [Serial]
Kubernetes e2e suite [sig-scheduling] Multi-AZ Clusters should spread the pods of a service across zones [Serial]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate
Kubernetes e2e suite [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms
Kubernetes e2e suite [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed
Kubernetes e2e suite [sig-scheduling] [Feature:GPUDevicePlugin] run Nvidia GPU Device Plugin tests
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning shoul